Reading through Mikel’s latest profound dissertation, I learned the following gems -
Mikel was an executive at AlliedSignal.
Bob Galvin asked Mikel not to publish the true derivation of Six Sigma, specifically the 1.5 shift, to keep it a mystery.
The Pareto principle is the 85/15 rule.
He invented the Breakthrough Strategy.
He invented plan-train-apply-review.
He invented what is now known as Pp and Ppk.
Of course, none of the above is true.
And the proof of the 1.5 shift? Samples of 4 have a 99.73% confidence interval of +/- 1.5 sigma. No duh. What about samples of 5, 10, 15, …? What? The confidence interval gets smaller. Oh my God!
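For anyone who wants to check the arithmetic being mocked here: the 99.73% (three-sigma) interval on the mean of a subgroup of size n is +/- 3σ/√n, which is +/- 1.5σ exactly when n = 4 and narrower for anything larger. A quick Python sketch, with σ set to 1 for illustration:

```python
import math

sigma = 1.0  # process standard deviation (1 for illustration)

# Half-width of the 99.73% (three-sigma) interval on the subgroup mean:
# 3 * sigma / sqrt(n). For n = 4 this is exactly 1.5 sigma.
for n in (4, 5, 10, 15):
    half_width = 3 * sigma / math.sqrt(n)
    print(f"n = {n:2d}: +/- {half_width:.3f} sigma")
```

Only n = 4 gives the magic 1.5; every larger subgroup gives something smaller, which is the poster's point.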
Of course there is other mumbo jumbo about variance inflation (the real villain!) and it’s really only about Chi-Squared.
Of course, Taguchi observed that most variance inflation is due to centering problems while Mikel was still a schoolboy. Of course, Taguchi thought that capability stats with respect to target were more important than Mikel’s invented Pp and Ppk.
Also lots of insulting comments – “to the uninformed reader”, “the advanced reader” (a discussion of the Poisson distribution), …
Another work of art and deception from Mikel. Harryites rejoice.
Hey Stan, where did you find this work of fiction? Is it small enough to post or something we have to actually spend good money for?
I wouldn’t call it good money, but it is for sale as an ebook through iSixSigma (http://www.isixsigma.com/ebooks/) for $16.95. (I would bet 90% of Reigle’s postings are really trying to get people to buy this.)
Since I am getting ready to delete it from my hard drive (5S in everything we do – get rid of the crap), perhaps I can give you my electronic copy (this would be like sharing a hard copy book if I don’t keep it electronically – right?).
You should read the reviews of the book on the web site including the third one – a gushing review from our good friend, Reigle.
That would be great, thanks. You can send to my home email.
Just remember compliance with copyright laws. I have only paid for one copy so only one copy can be in use at any given time (time tested software usage rule).
Sent at 10:16.
Removed from my hard drive at 10:17.
Ball is in your court. By the way, there is nothing mathematically sophisticated here. Just a lot of “the following is intuitively obvious to most casual idiot” (paraphrased) just before the hand waving starts.
Just a couple of points here. I do acknowledge that design margins are needed. I do acknowledge that means shift and variance inflates over time. The real deal is most processes are known or knowable; most designs are evolutionary, not revolutionary. We should use what we know to its fullest and our designs will be more cost effective than making any fixed assumption. We should work on our controls and figure out why many companies can keep inflation or shift to less than 0.5 sigma.
The reality is that most companies doing six sigma are working with poor quality systems and poor controls. Most “breakthrough projects” are not that at all. They are knowledge sharing and control. No need for all of the other mumbo jumbo. Do we need to know statistical and knowledge sharing tools? Of course. Do we need to up the caliber of Leadership? What a dumb question – eh?
Stan: You have made some statements that you will likely not be able to back up. For example, if you reference pages 6-7 through 6-16 of Dr. Harry’s book “Six Sigma Producibility Analysis and Process Characterization” (first published in 1988 and then again in 1992) you will find the mathematics that prescribes Cp*, now known as Pp, as well as Cpk*, now known as Ppk. Dr. Harry demonstrates that the index Cp assumes a static standard deviation (S.st), but over time, the standard deviation inflates (S.lt), thereby mitigating the Cp computation. In other words, Dr. Harry demonstrated that Cp* = |T-SL| / (S.st*c), where c is the rate of dynamic inflation. He also demonstrates that Cpk* = (|T-SL|*(1-k)) / (S.st*c). Stan, if you dispute this “first usage,” then provide your references prior to 1988. Please provide for all of us the source name and page numbers.

In terms of the Plan-Train-Apply-Review (PTAR) cycle of learning, I am sitting here holding a book entitled “The Vision of Six Sigma: A Roadmap for Breakthrough” published by Dr. Harry in 1993. On page 25.3 it shows a graphic entitled “The PTAR Training Model”. The page provides a picture of the PTAR cycle and says “The PTAR model is executed for each phase of the Breakthrough Strategy. In this manner, training is introduced as it is needed and at a rate it can be institutionalized.” Again Stan, provide a source and page number for the same usage prior to this date.

In terms of the words “Black Belt” and “Green Belt,” I am holding a copy of the contract Dr. Harry had with Unisys corporation in 1987. It has several pages dedicated to the training of “Black Belts, Brown Belts, and Green Belts.” Also, there is a letter from Cliff Ames (Unisys executive) that says “I am writing this letter to confirm the fact that we hired Dr. Mikel Harry in the fourth quarter of 1987 to help with the implementation of a high performance management system in the Unisys Salt Lake Printed Circuit Facility. This program was conceived and implemented during the time frame of Q4-87 through Q2-89. During this period of time, the terms ‘Black Belt,’ ‘Brown Belt,’ and ‘Green Belt’ were introduced to the facility by Dr. Mikel Harry. As the responsible Plant Manager, I agreed with these terms and implemented them to put a label on our statistical superstars.” Stan, I again ask you to cite a reference and page number for usage of these terms prior to Q4 1987. You keep saying these things are not true, I keep citing references, you never cite anything verifiable (other than your opinion). Show us the data, buddy. Regards, Reigle Stewart
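For readers trying to follow the alphabet soup: whatever the priority claims, the indices now standardized as Pp and Ppk are computed from the overall ("long-term") standard deviation, while Cp/Cpk use a within-subgroup ("short-term") estimate. A minimal sketch of the standard textbook definitions (not Dr. Harry's Cp*/Cpk* notation; the helper name is just for illustration):

```python
import statistics

def pp_ppk(data, lsl, usl):
    """Overall (long-term) capability indices from individual measurements."""
    mean = statistics.mean(data)
    s_overall = statistics.stdev(data)  # overall sample standard deviation
    pp = (usl - lsl) / (6 * s_overall)  # potential capability, ignores centering
    ppk = min(usl - mean, mean - lsl) / (3 * s_overall)  # penalizes off-center mean
    return pp, ppk

# A perfectly centered toy process: Pp and Ppk coincide.
print(pp_ppk([8.0, 10.0, 12.0], lsl=4.0, usl=16.0))  # -> (1.0, 1.0)
```

An off-center process drives Ppk below Pp, which is exactly the distinction (centering vs. spread) the two indices exist to capture.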
Stan: Wow, you are really backpedaling on your position now. Suddenly, you are saying that an expansion in the standard deviation can be equated to a linear shift in the mean (reference your reply to Fernando … it is the “same”). Several months ago, you argued the opposite. Seems the closer you move to this debate, the more your position is “shifting and drifting.” Can’t wait to read your white paper. Regards, Reigle.
Stan: Where are your references? You make public statements that Dr. Harry did not do or create many of these things, so provide us with a source document of “earlier usage.” If what you say is true, that should be very easy for you to do. After all, management by fact is the Six Sigma way. Besides, you will have to produce such documents during the debate. Reigle
The measures of long term and short term were introduced by Ford. Go do your homework.
Mikel was an executive at AlliedSignal?
I thought the white papers were supposed to go to the judges?? How come you’re reading them beforehand??
Yea Reigle, what’s the deal.
You have judges that are associated with you and Mikel at SSMI. It is happening at SSMI’s partner location. You limit the scope of the debate and white paper to the faulty assumptions of Mikel’s mathematical “proof”.
Read the posts lately, more and more people are on to your game.
Stan – he’s been dodging these questions from the start. He’s setting you guys up. I’m starting to believe his motivation is to try and humiliate you guys (TRY). He’s not interested in a fair debate. He is playing games and we all know it.
Breakthrough is from Dr. Juran’s book Managerial Breakthrough
PTAR is from the adult learning model
Mikel was an executive at AlliedSignal?
Tell Mikel that I think he needs to up the caliber of his cheerleaders as well.
Stan: You say “Ford,” but what document at Ford are you referencing, and what is the date of that document? I keep asking for documentation, but you don’t produce any. I am sure you believe what you believe; now show the rest of us that your beliefs are founded in verifiable documents. Let’s practice some Six Sigma here, OK? By the way, Dr. Harry’s business partner is Mr. Phong Vu (Phong was the Senior Deployment Champion for Ford Motor Company when they rolled out Six Sigma). Reigle
Matt: I am only quoting from several source documents that define “first usage.” These documents are currently being readied for posting on the internet for all to see (and for some to read and weep). Several people are going to feel a little awkward when they see these documents, given they have strongly asserted (without documentation) a position to the contrary. Sometimes you have to give a person enough rope to hang themselves.
I am no friend of Mikel Harry. Just like him, I was also at Motorola, working with Bill Smith directly to implement Six Sigma (not promoting myself!). And, like many of us, I compete with him or others for opportunity.
Having said that, and knowing Motorola’s contributions, I still give credit to Mikel Harry and Richard Schroeder for packaging Six Sigma for reusability (and we are all using it) and for commercializing Six Sigma, from which many of us are benefiting.
Sometimes, I have a hard time understanding why we are all trying to beat up one or two people for something good they have done. They may not be perfect; neither are we. They may have egos (who cares). So why bother with what Mikel does? We need to ask what we can do to make Six Sigma better, or apply it more cost-effectively.
Overall, Mikel has done a great job, and he has made a lot of money. Great for him. Maybe, we can learn some of his ‘good’ traits. It is just my opinion.
Stan: Sounds like you are having second thoughts. All you have to do is prove that Dr. Harry’s equations are without merit and are wrong. All you have to do is produce your references to support your allegations. Of course, to do this, you must demonstrate what is “right.” This should be a piece of cake for your genius mind and all-knowing experience. Sure looks like you are starting to backpedal now … faster and faster. If you read Dr. Montgomery’s and Dr. Keats’s credentials, I do believe you will find they have little bias one way or the other. Sounds like paranoia to me (or someone starting to run a little scared). Besides, the papers will be published along with the transcripts, so any bias will be in plain view … so I doubt they will allow such bias. Reigle Stewart
Mr. Vu? Wow, Ford is making money? Ford is #1 in satisfaction? Ford is known for their Six Sigma efforts? The answer to all is no, of course.
Go do your homework Reigle.
I agree with you about the commercialization. But what about taking credit for things that are the work of others?
I believe Mikel should take credit for the good he has done and stop the other nonsense. That and go take a two year assignment at Toyota and go learn about quality.
Stan: I have Juran’s book you refer to. Nowhere, and I mean nowhere in the book, does it refer to the “DMAIC” cycle of breakthrough … Juran just uses the word “breakthrough” in an improvement context … Plan-Do-Act stuff, not “DMAIC,” or even “MAIC.” Give me the source and page number of first usage of “DMAIC.” You will find it was Dr. Harry that first used DMAIC. You say PTAR is from the “adult learning model” … Give me the source and page number of first usage of “PTAR.” Of course, you won’t, because you have no such sources. Seems I am willing to provide sources, but you are not. No, Dr. Harry was NOT an executive at Allied Signal; Rich Schroeder was. Dr. Harry was Corporate Vice President, Quality Systems, Asea Brown Boveri … a 500,000-employee company in Europe. He reported to Sune Karlson (Executive Vice President and Member of the Board). Again, your “facts” are in error. Reigle Stewart.
I know Bert and Doug and am a little surprised at the company they keep these days.
Reigle, go read the book. That is where the claim of Mikel being an executive at AlliedSignal is.
Stan: More RHETORIC and ALLEGATIONS … without facts, documents, or equations. Enjoy hiding behind your “code name” for now. Keep on making false rhetoric and allegations while you can. Everyone knows you will find some excuse not to debate. Stan, the world is “getting on to you.” Your tactic of unsubstantiated rhetoric and allegations is getting out of hand and sounds more ridiculous by the post. You won’t identify yourself, you won’t give us your “first usage” references, you won’t give us your math; you only provide a lot of unfounded opinions. The only person not “playing fair” seems to be you. There is an old saying: “Let the product do the talking,” words you will likely never forget after the debate. By the way, have you ever been responsible for deploying Six Sigma across a multi-national company … of course you have not, but you are always on this site putting out false advice (as if you had first-hand corporate leadership experience). Reigle Stewart
Stan: Funny thing, they don’t know you. Reigle Stewart
Stan: I am done with this bantering. Good luck at the debate.
Reigle, stop toying with my heart. You are not even close to being done.
Dear Guess Who, oops I mean StatmanToo, oops I mean Reigle,
Yes I have been responsible for such deployments.
Understood. Thanks for clarifying. Has Statman submitted his paper??? He’s been quiet lately. Just curious.
I think what most of these posters know is that Mikel Harry and the Six Sigma Academy are affiliated with ASU. Just read the posts. I don’t even want to bring up the discussion of how Mikel Harry got his Ph.D. at ASU; don’t get me started on that. That’s a whole different subject. But the fact is that you guys have a previous relationship with ASU and YOU GUYS picked the judges! Another point is that there are many quality professionals who don’t believe in the 1.5 sigma shift, whether it is a theoretical “proof” or an empirical proof, for many reasons.
It would be very naive to think that the outcome of this debate will change these folks’ opinions. So … why are we wasting everyone’s time with these posts? This is nothing against you, but do you honestly believe that if you guys win this debate, you will change our opinions on this subject?
It appears to me that some people have made so much money that now they have the time and money to spend on “debates” like this.
Yes, Juran does not use DMAIC. He talks about two journeys, the first of which is DMAI and the second is C. He uses different names, but the concept and the idea of breakthrough are his. He also declared that all improvement is project by project, or did Mikel think of that too while still in diapers?
Stan: Sorry for jumping back in, but I could not resist when I read your post that said “Yes” to the question: Have you ever been responsible for the deployment of Six Sigma in a multi-national company? Will you give us the names of those companies so we can verify that you had the corporate leadership responsibility for deployment? All we need are the company names … the rest can be easily verified in a very short period of time (like a couple of phone calls). Please, give us the company names … this could be a big step in your direction. Reigle Stewart
Kind of sad that it is soooo important to them.
Money does not buy happiness you know?
Told you that you were not done.
I don’t need verification of my credentials, especially to a guy whose apparent credentials are “worshiper of Mikel.”
My advice on here is sound, people can choose to take it or not.
Stan: Wow, I checked again, but I cannot find the page number where Juran uses the term “DMAI,” “MAIC,” or “DMAIC.” Please, give me the previously mentioned documents and page numbers and I will shut up … no kidding around, give me a reference for “first usage” of the terms Black Belt, Green Belt, Brown Belt, Yellow Belt, PTAR, DMAIC, MAIC, and so on, and I will be humbled and simply go away forever (once I have verified your references). Now how is that for an offer to your heart?
Once again you guys are arguing over dumb things such as whether Juran calls it DMAIC, MAIC, etc. This is all BS! Go read Quality Digest, May 2004 issue. On page 30, second column, Juran gives his opinion on Six Sigma: “He does not have much hope for Six Sigma as quality’s savior, he said. Nor does he especially like it, particularly all the hype …”.
JD: An affiliation with an institution does not determine the validity of mathematics, the soundness of one’s reasoning, or the validity of one’s references … the math stands on its own merits, as do the references … for anyone to independently examine. These judges are exceptionally qualified and world-renowned within the disciplines of statistics, engineering, DOE, SPC, reliability engineering, and so on … Dr. Montgomery was a Shewhart Medal recipient. The debate is not intended to change anyone’s mind … simply provide the arguments and let the referees decide. Once posted, you can decide on your own. Bottom line, Stan is a big boy … he agreed to prepare a white paper that provides the math to counter Dr. Harry’s position … he agreed to participate in the debate … if you are not personally interested in the debate, then ignore it. No one forced Stan into this position … he agreed, and has made his opinions and allegations known to the world (as evidenced by his posts on this website) … now, let him defend them in an honorable way. The truth will prevail, unless you don’t want the truth known (for any number of reasons). Reigle
What happens if I calculate a Z score from a bunch of process data across a significant number of runs, raw material batches, etc., then don’t add 1.5, and call this my Z long term? Then I calculate a Z short term based on a pooled standard deviation from a rational subgrouping of that big mess of long-term data. How does this affect the Six Sigma process? What I am asking is: Do I have to add 1.5 to my Z score in order for the DMAIC process to work? Can I call 3.4 DPMO 4.5 sigma and still have the Six Sigma process give good business results? The mathematical/statistical validity of the 1.5 sigma shift isn’t that important to me. I just want to get good results.
Thanks in advance,
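On the arithmetic of the question: 3.4 DPMO corresponds to a one-sided normal tail at Z ≈ 4.5 with no shift applied; the conventional "6 sigma" label only appears after adding the 1.5. A quick check (Python 3.8+, using the standard library's NormalDist; the helper name is just for illustration):

```python
from statistics import NormalDist

def z_from_dpmo(dpmo):
    """One-sided Z value whose upper tail area equals the given defects per million."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000)

z_lt = z_from_dpmo(3.4)   # Z with no shift applied: ~4.5
z_reported = z_lt + 1.5   # the conventional "sigma level" after adding the 1.5 shift
print(f"Z = {z_lt:.2f}, reported sigma level = {z_reported:.2f}")
```

So calling 3.4 DPMO "4.5 sigma" and calling it "six sigma" describe the same tail area; the 1.5 is a reporting convention layered on top, not a step the DMAIC calculations themselves depend on.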
Told you that you could not stay away.
I never said Juran used DMAIC, I said Mikel just renamed Juran’s journeys.
Hey Juran is in the news a lot for his 100th birthday. Since Mikel knows him on a first name basis, have Mikel ask him what he thinks.
You think the letters are important; you think the “belts” are important. They are not. Just a lot of hype.
One of the posters in the last few days (either Fernando or Andy U) noted that none of the belt or DMAIC crap was in place when Motorola was kicking butt and taking names in the late 80s, and that Motorola has been in decline almost since the instant Mikel started his claims. I’ll add another “fact” to the mix – Mikel had little to no impact on the improvements or culture of Motorola in this time period. I wonder why?
Look at the tag advertisement just above the forum section: “Joseph Juran, whose ideas led to Six Sigma and other management strategies…..”
Everyone is wrong except the great Dr.
Matt: Dr. Juran has accomplished many great things in his highly distinguished career, but Six Sigma is not one of them. When did Juran’s work (books and papers) start to talk about “Six Sigma?” In fact, Motorola got rid of the Juran stuff in favor of Six Sigma. Interestingly, the Juran Institute uses Dr. Harry’s name as an alternate description (i.e., alt-tags) for their website … wonder why? On some computers, you can see Dr. Harry’s name appear many times when the site first opens up. Reigle Stewart
Matt: Go to the Juran Institute site and look under “Who We Are.” Hold your mouse over the title “Juran Global” or “Juran Partners” and you will see Dr. Harry’s name referenced in the alt-tag (but not within the text). Sneaky, huh? I will guarantee you with 120% probability (if that were possible) that Dr. Harry IS NOT a partner with Dr. Juran or the Juran Institute. So what is all of this saying? I do believe it’s self-evident … the Juran Institute believes it can capitalize on the name “Dr. Harry.” Oh well, so it goes with free enterprise … if you can’t get ’em with your own name, then use someone else’s name … even if they have not asked permission to do so. Wonder why they didn’t use Stan’s name? Reigle Stewart
This may not be conclusive, but…
I have a publication from Ford Motor Argentina, “Control Continuo de Proceso” (Continuous Process Control), probably a translation or adaptation of an original version in English by Ford Motor Co. It looks very much like the SPC handbook from Ford, Chrysler, and GM, which is probably based mainly on this work from Ford. This is my translation to English from the version in Spanish I have in front of me now. The bold and underline are mine:
On page 23, under the title “Interpret for Process Capability”, it says:
“4.a. Calculation of the Process Standard Deviation
As the short term process variability is reflected in the subgroup ranges, the estimation of the short term process capability is based on the average range Rbar. Calculate:
sigma hat = Rbar/d2″
Then the calculation of ZU, ZL, and Cpk follows.
No mention is made of the long-term variation, and Pp and Ppk (which are included in the SPC handbook from AIAG) are not mentioned either.
But I am sure no one would refer to “short term variability” and “short term process capability” if there were no other things known as “long term variability” and “long term process capability”.
So my conclusion is that the concepts of short-term variation, long-term variation, short-term process capability, and long-term process capability, and therefore Ppk or some equivalent form of “Cpk for long-term process capability,” already existed by the time of this publication.
This publication is dated 1983.
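The calculation quoted from the Ford handbook can be sketched directly. The d2 values below are the standard SPC bias-correction constants for converting an average subgroup range into a sigma estimate; the function name is just for illustration:

```python
import statistics

# Standard d2 bias-correction constants for subgroup sizes 2-5 (published SPC tables)
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def short_term_capability(subgroups, lsl, usl):
    """Short-term capability from the average range, per the quoted method."""
    n = len(subgroups[0])
    rbar = statistics.mean(max(g) - min(g) for g in subgroups)
    sigma_hat = rbar / D2[n]  # sigma-hat = Rbar / d2
    grand_mean = statistics.mean(statistics.mean(g) for g in subgroups)
    zu = (usl - grand_mean) / sigma_hat  # distance to upper spec, in sigma-hat units
    zl = (grand_mean - lsl) / sigma_hat  # distance to lower spec
    cpk = min(zu, zl) / 3
    return zu, zl, cpk
```

This mirrors the handbook's sequence (sigma-hat from Rbar/d2, then ZU, ZL, Cpk); a long-term index like Ppk would instead use the overall standard deviation of all the data pooled together.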
Also, in my previous job, we used a PDCA system to run improvement projects in teams. I had a direct part in putting this system in place (I will not say “I was responsible for it,” because I was, but I was not the only one). It was a local company (big for a local one, but small compared to the multinationals) and we had never heard of Six Sigma.
The main phase was “plan”, and it included looking at company objectives, measuring process performance on those indicators, finding the best opportunities for improvement using process flowcharts, Pareto, cause-effect, brainstorming, SPC, etc. to find causes and solutions, and making an implementation Gantt chart (looks like DMA).
The next phase (do) was to follow the Gantt chart, and the next phase (check) was to verify a) that the improvement was done, b) that it was due to the changes made, and c) that no harmful side effects were present (looks like I).
The final stage (act) was to make the changes in all documents as needed (drawings, set-up sheets, control plans, working instructions, etc.), to review the training materials, and to provide training according to the new standards (looks like C).
Put all that together, and it looks like DMAIC, even though the name was PDCA. And I am sure it was not an original invention of that company.
This system was customized from materials about TQM which were pretty old; I don’t know the exact date, but surely before 1985.
Without a profound knowledge of Six Sigma, my impression is that Six Sigma is an evolution of TQM, as TQM was an evolution of something else, and so on… I think we are on a train which is the continuous improvement of quality improvement methods, a train that started running more than a century ago. No one can claim to be the owner of this train we are all building and riding on, with contributions that can be more significant or less significant, but are there anyway.
I just try to put my 2 cents on this train.
Gabriel: I know you mean well when you say “No mention is made of the long-term variation, and Pp and Ppk (which are included in the SPC handbook from AIAG) are not mentioned either.” So the bottom line is simple … the Ford document you have does not reference Cp* and Cpk* (now known as Pp and Ppk). So Dr. Harry’s first usage still stands. What year is the AIAG document you have? Does this document call these things out before Dr. Harry’s first publication (the Producibility Analysis book, 1988)? Respectfully, Reigle Stewart.
Matt: One more thing, a simple “HTML Source Code” report shows that the Juran Institute uses Dr. Harry’s name as a Meta-Name Keyword. As you know, such meta-names are used by search engines to find “relevant” searches. While this is flattering for Dr. Harry, it would seem to be a questionable use of his name. Again, why would the Juran Institute stoop to such tactics? Reigle Stewart. PS: The Juran Institute is not the only “Six Sigma Consultancy” doing this.
STOP THE PRESSES, CANCEL THE DEBATE. Here is an interesting quote, which in my humble opinion relegates the whole discussion of the 1.5 shift into the realm of “who gives a sxxx”. There is sufficient hedging in these few sentences to convince me this whole thread is much ado about nothing.
“…it should fairly evident (sic) that the 1.5 sigma shift factor can often be treated as a “statistical correction,” but only under certain engineering conditions that would generally be considered “typical.” By all means, the shift factor (1.5 sigma) does not constitute a “literal” shift in the mean of a performance distribution – as many quality practitioners and process engineers falsely believe or try to postulate through uniformed (sic) speculation and conjecture. However, its judicious application during the course of designing a system, product, service, event or activity can greatly facilitate the analysis and optimization of “configuration repeatability.” “In summary, the 1.5 sigma shift factor should only be viewed as a mathematical construct of a theoretical nature….However, when typical application circumstances are postulated and rationally evaluated, the resulting shift will prove to be approximately equivalent to 1.5 sigma.”
Oh, yes? And what about these ones?
About the 1.5 sigma shift as a compensation for sampling variation:
“In this context, the 1.5 sigma shift is a statistically based correction for scientifically compensating or otherwise adjusting a postulated model of instantaneous reproducibility for the inevitable consequences associated with random sampling variation. Naturally, such an adjustment (1.5 sigma shift) is only considered and instituted at the opportunity level of a product configuration.”
About the 1.5 sigma shift to convert from long term to short term:
“In this case, we seek to set Z.shift at the convenient and conventional value of 1.5”.
“The reader is admonished to recognize this author’s repeated use of the word “approximation.” In this context, Z.shift is an “engineering approximation,” not a “statistical estimate.” This is a difference that seems to escape many practitioners of Six Sigma.”
All sic by Mikel Harry
To see the background, look at messages 32225 and 37020 in this forum.
Gabriel: Thanks for making my point … seems that when you cannot offer math or facts or documents to support your position on an issue, you turn to taking paragraphs fully out of context and skewing them to what you want them to say. Everyone knows this age-old parlor trick … it just makes the perp look even more foolish. So let me ask you, what do the other 186 pages of the book say? Wow, for a bunch of people that claim to be “Six Sigma Professionals,” you really don’t follow the practices you preach. Come on and get with it, Gabriel; make your case … show me your math … not your limited understanding. The same goes for Darth. Give me the references and facts holistically, not in the fragments you want to present. I won’t let you off that easy. Reigle Stewart
Gabriel: Praveen said it best. You really don’t know how to extend credit to someone that has earned it. Bottom line, your professional jealousy is peeking through … everyone else can see it but you. You want Dr. Harry to be wrong soooo bad, you even compromise your own integrity (and don’t even know it, but others do). Greed has the same effect, you know. So does jealousy. Save the words and just show your math that proves Dr. Harry’s equations are wrong … it’s that simple … but then we know you won’t do that either. Reigle Stewart.
I’m sorry, I have to say this. Reigle, Gabriel has contributed endlessly to this site, free from ego and one-upmanship. He has helped so many folks with practical, useful advice. He isn’t stuck on the 1.5 sigma shift. I’ve yet to see any useful, usable advice from you. All we hear about is “Dr. Harry this” and “Dr. Harry that”. Stop trying to pick a fight with everyone. But I’m sure I’ll be next.
Matt: You make an excellent point about Gabriel, and I will yield to your request and judgement. However, you must look at it from my shoes as well. For twenty years I have been with Dr. Harry … I have seen first-hand his work put into action by so many corporations (that have benefited so greatly). His contributions are tireless and continuous. When you witness such things first hand and then read the crap some say on this website, it does “stick in your craw.” When you hold documents in your hand that clearly demonstrate “first usage” (like the terms Black Belt and so on) and show the dates on which these documents were signed, and then hear Stan say things like “He didn’t invent anything,” it really hurts on a personal as well as professional level. Stan (and several others) seem to have a need to attack the character and contributions of others, thereby bringing shame to our profession. We should all be willing to acknowledge the truth when presented with evidence (just like in a court of law, also based on the principles of logic and science). When challenged, we should have the personal courage and strength to step forward and provide our facts. If we do not have the facts, then we should freely say so and concede that our position is one of conjecture, not fact. Otherwise, we simply make our profession look bad. Reigle Stewart
This thread remains somewhat compelling – much like not really wanting to look at a train wreck but not quite being able to turn away from it.
As much as I love good theoretical and ideological debates [and I normally do] this argument lacks the grounding provided by significance and relevance. You and Stan have each selected a different perspective to perch on and will never join each other on the same perch.
Who said Six Sigma first? Who said Black Belt first? Is there validity to the 1.5 sigma shift? What a waste of your time.
You are both really bright knowledgeable guys (and possibly Dr. Harry also) and you could be conceptualizing and leading the next advance in Six Sigma but you’d rather see if you can pee on each other from your perch. Just a comment from the pee-nut gallery.
Matt: By the way, you are very quick to point out Gabriel’s contribution to this website … why are you not as quick to point out Dr. Harry’s contribution to the world? Why do you not declare Gabriel’s supposed flaws, yet you are quick to point out Dr. Harry’s? Seems you may not have a balanced playing field. Respectfully, Reigle Stewart
SSNewby: Excellent point. Dr. Harry has been working for the last 18 months on developing Generation III Six Sigma. The focus is on Value Creation and looks at an organization’s “Velocity of Capacity.” It utilizes the ICRA approach: Innovation, Configuration, Realization, and Attenuation. Hey Stan, I’ll bet Dr. Harry did not invent any of this either … huh. The white paper on Gen III and Velocity of Capacity was written by Dr. Harry some time ago and is now coming into its own. You will be hearing a lot about this in the near future. It is currently being delivered to POSCO … Korea’s largest steel manufacturer. That’s Dr. Harry’s job … I am doing mine too. Reigle
Do you work for a living or are you retired? No insult intended! I am only asking because of the number of your posts, and I must say, long ones too.
The forum was much more productive before you and Stan started all this garbage! At least before, it had some value added and people could see what type of problems other folks had and learned from each other. You and Stan turned this to a Ping-Pong match, wasting everyone’s time.
We don’t care about all this crap, please stop it and get back to work. That’s if you guys have work to do.
I honestly don’t give a bleeping damn about the 1.5 sigma shift and the history of Six Sigma according to you two.
Where the heck are the isixsigma folks and why don’t they stop all this crap?
My vote is that the iSixSigma folks stop these two people from ruining this discussion forum also. Plus, if they’re not supposed to be promoting products and services, why is one of them doing that for Dr. Harry? It’s a bunch of you-know-what that this has gone on so long. Stop now, please.
Reigle, fair statement. I don’t have anything against Dr. Harry. I don’t doubt his contribution. I just thought the slant against Gabriel was uncalled for.
Matt: Thank you. Reigle Stewart.
You are the one who is not getting off easy.
The mathematical proof, as you call it, is just stating some assumptions and then running a string of stats based on those assumptions. The assumptions are not valid in most cases.
You are going after the wrong person here. Gabriel always gives the best, most complete answers of anyone on here and is clearly here to share knowledge and learn.
You don’t get points for going after honest, sincere people.
It is the stretching of the truth (and beyond) that makes your camp so inviting. Case in point – “I’ve been with Mikel for 20 years” – you did not know Mikel from Stan 20 years ago. Start being factual and maybe I’ll back off.
Mikel’s contribution to the world? What in the heck would that be?
Mikel’s contribution to Mikel? That is a different story.
Just Stop all NONSENSE on this site.
I am damn sure there is NO DEBATE.
But just two people trying to prove mastery over each other in every possible bu_ _ sh_ _ way and pulling each other down.
THIS IS THE DIRTIEST, UNHEALTHIEST THREAD AND IT NEEDS A FULL STOP.
If Reigle has issued a challenge and if Stan ever accepts it, then go have the debate; otherwise, focus on other Six Sigma methodology or tools questions.
Hell with 1.5 shift ! and this useless DEBATE.
I have been following this discussion of the past few weeks (as the other threads) and have a couple of questions. However before I throw them out please grant me this, I am NO EXPERT but rather “still the student”.
1) In Dr. Harry & Mr. Schroeder’s book (Six Sigma, The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations), on page 144, at the end of the second paragraph, they state that the 1.5 sigma shift is a “fudge factor”. With that said (and based upon what I have been taught by GE), I was under the belief that the 1.5 sigma shift is a “rule of thumb”.
If that is the case, can you mathematically prove a “fudge factor”? And, how do you then mathematically disprove the same?
I’ve always thought that “rules of thumb” are those things that you fall back on when things “just don’t make any sense”. And you need to “hit the I BELIEVE BUTTON and move on”. Am I wrong???
2) Six Sigma Generation III. From what I have read in a few posts on this website, on Dr. Harry’s website, and in the few articles I have found, it sounds remarkably like the material I have read by Womack, Jones, Rother, Shook, and Dennis. You know, Lean (and that part of the Toyota Production System that is called Lean)???
So my second question is: What is the difference?
Or: Is Six Sigma Generation III a ”new” marketing of the integration of Lean with Six Sigma?
As I said, I am not an expert but I am curious. Any thoughts (from anyone)?
I agree. It was a slow week on the forum and I thought I might stir the pot a bit.
My only points are: the 1.5 shift is not relevant and there is no proof, only some mathematical manipulations based on assumptions; the assumptions are not valid in most cases, as we don’t need to assume the behavior of processes we know; be careful of what is stated as fact, because it isn’t; and be careful of what is being sold, because it does not match your experience.
I am out of this thread.
Reigle, here you have some FACTS:
“you then turn to taking paragraphs fully out of context and skew it to what you want it to say”
The context is briefly expressed in a title before the quotes (“About…”), and then you had the reference to the threads in this forum where these quotes were taken from (messages 32225 and 37020). You or anybody who wants to can go and look.
However, to help refresh your mind, the last two quotes were taken from the pseudo-answers that Mikel Harry gave to my questions in his forum on this site. The question was the following one:
“Dear Dr Harry: In the iSixSigma forum, Reigle Stewart did a nice job showing the validity of the 1.5 sigma shift in a DPQ context for specific samples, but told me to better ask you the following: 1- How do we get from that to count defects, count opportunities, find how many sigmas away should a normal distribution’s average be from a unilateral specification limit to produce as many defectives per units as defects per opportunity were found, call that “long term sigma level”, and add 1.5 to get the “short term sigma level” despite the sample size, the process distribution, and how it will be controlled? 2- As I understand, the important figure is Zlt. We calculate and report Zst instead due to the long time involved in long term studies. But sometimes we have enough history to calculate Zlt without assumptions about the shift. Why is the sigma shift used to “reverse” the approach and estimate a Zst based on Zlt and an assumed shift, instead of just reporting Zlt? Thanks for your help”
As you know, I asked this question of you three times and I am still waiting for an answer. When I asked Mikel in his forum, I received a first “answer” that included a lot of assumptions and the two quotes. The question was not really answered. When I posted a second question in Mikel’s forum noting that he had not answered my original questions, he gave me a nice answer saying that it was a profound and intelligent question that went beyond the scope of the forum, and to go buy his book.
The first quote was from an answer from Statman to a post of mine. It said that uncertainty due to random sampling and added variation over the long term are two distinct things, and that the explanation of the 1.5 for DPQ (under certain restricted conditions of sample size, alpha risk and distribution shape) is absolutely not applicable to the conversion of Zlt to Zst, which is a completely distinct concept.
“So let me ask you, what does the other 186 pages of the book say?”
I don’t know. Statman quoted Mikel’s book, and I quoted Statman’s quote. I don’t have the book. And now I have other economic priorities before buying it (not saying it’s good or bad, just that I have other priorities, OK?)
“Wow, for a bunch of people that claim to be “Six Sigma Professionals,” you really don’t follow the practices you preach”
That’s BS. I never said I am a Six Sigma Professional or anything close to that, because I am not, and I do not want to pretend to be what I am not. In fact, I have said several times that I am not an expert in SS or a belt of any color. And in one of my last posts (message 46104), which you read because you answered it, I said: “Without a profound knowledge of Six Sigma, my impression is…”
I don’t have a formed opinion about Mikel Harry. I do not know him or his work other than from the posts in this forum. I just didn’t like the answers he gave me (or better, that he didn’t give me) to two straight questions. And that’s what I quoted.
Where did you get the following equation from?
sigma hat = Rbar/d2
For an Xbar and R chart, I believe A2 * Rbar = 3 * sigma_hat, so sigma_hat = A2 * Rbar / 3. Could you explain where you got that equation from?
P.S. don’t waste your time replying to posts on “theoretical proof” of 1.5 sigma. We all know better. I guess what amazes most of us is how it went from “approximation” to “proof”.
I don’t know where Gabriel got it but I got it off the NIST Engineering Statistics website
I hope that you don’t mind if the rest of us do consider you a very effective and helpful Six Sigma professional. Don’t waste your time yelling into the wind on this. The wind is what it is and little more.
The source where I copied this formula from in this particular case is in the previous post itself: Control Continuo de Proceso, Ford Motor Argentina.
However, it is available in any book that touches the SPC topic, and also on several web sites such as the one posted by faceman.
But I can see where your confusion comes from:
There are two populations, each with its own distribution: the population of individual values and the population of subgroup averages (well, there are other populations such as the ranges, etc., but let’s focus on these two for now).
The control limits are always at ±3 sigmas, but sigmas of the distribution of the subgroup statistic you are charting (Xbar, R, median, S, p, u, etc…).
As you said, in an Xbar chart the upper control limit is Xbarbar + A2*Rbar, where A2*Rbar is an estimation of 3*sigma(Xbar), and sigma(Xbar) is the standard deviation of the distribution of subgroup averages. In your notation, A2*Rbar = 3*sigmahat(Xbar), where sigmahat(Xbar) is an estimation of sigma(Xbar), and then sigmahat(Xbar) = A2*Rbar/3.
Because of the central limit theorem, sigma(Xbar) = sigma(X)/sqrt(n), where sigma(X) is the standard deviation of the distribution of individuals and n is the subgroup size.
If sigmahat(X) = Rbar/d2 (sigmahat(X) being the estimation of sigma(X)) is true, as posted in my previous post, then we would have:
sigmahat(Xbar) = sigmahat(X)/sqrt(n) = Rbar/(d2*sqrt(n)) = A2*Rbar/3 ==> A2 = 3/(d2*sqrt(n)). Can this be true? Let’s see:
For subgroups of size 4 my SPC handbook (AIAG) says d2=2.059 and A2=0.729.
3/(d2*sqrt(n)) = 3/(2.059*2) = 0.72851. My God, it works!
The true story, or the same story in the right order, is: an estimation of the process standard deviation (within subgroups) is sigmahat(X) = Rbar/d2. Because of the CLT, sigmahat(Xbar) = sigmahat(X)/sqrt(n). The control limits for Xbar are at ±3*sigmahat(Xbar), which is ±3*Rbar/(d2*sqrt(n)). d2 is tabulated, so no further constant is needed, but to avoid calculations let’s take 3/(d2*sqrt(n)), call it A2, and tabulate it too.
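Gabriel’s check can be reproduced in a few lines of Python (the d2 and A2 values below are the tabulated AIAG constants for n = 4 quoted in the post):

```python
import math

# Tabulated constants for subgroup size n = 4 (AIAG SPC manual values)
n = 4
d2 = 2.059   # E[R]/sigma for samples of 4 from a normal distribution
A2 = 0.729   # tabulated Xbar-chart factor for n = 4

# Gabriel's identity: A2 = 3 / (d2 * sqrt(n))
A2_derived = 3 / (d2 * math.sqrt(n))
print(round(A2_derived, 5))  # 0.72851, matching the tabulated 0.729
```

The same check can be run against any other row of the constants table; the tabulated A2 values are just this expression rounded to three decimals.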
Whoever you are, thanks for your kind words and your wise advice.
In an attempt to mediate this Harry Feeding Frenzy, might we at least agree on the following:
1. The development and evolution of SS was a collaborative effort of a number of talented people and visionary leaders including but not limited to Dr. Harry.
2. The current popularity and commercialization of SS was due, in no small part, to Dr. Harry and his Associates.
3. Based upon the specific assumptions put forth by Dr. Harry, the 1.5 shift has some mathematical validity. The shift is not an inviolate truth, nor does it have universal application, and it should in no way be viewed with unquestioning religious zeal.
4. Reigle can be long-winded, pretentious and pedantic. It is obvious that he is a fan of Dr. Harry, to say the least. But he has answered every arrow shot at him in a courteous and professional manner without resorting to name calling or hysterics.
5. Stan can be short, cynical and rude. But, he has been a frequent poster and has provided a lot of valuable feedback to others on this Forum.
So, can we agree on the above and move on to addressing issues of wider appeal and enlightenment?
Fantastic! I now see that one is the sigma of the population and the other one is of the subgroups. Thank you!
If you recall, we had some discussion on control charts and the normality assumption/CLT on this forum a couple of weeks ago, and we all know now that this assumption is NOT true. Is it correct to say that this is the only time the CLT is used for control charts?
Once again, thank you!
Well stated. I agree and suggest that your thoughts be taken in the considerate and respectful manner in which they were offered versus being picked at and debated. Let’s move on. We Six Sigmatites [however long our respective involvements have been] have many real hills to climb and we should climb them together.
Methods are used to estimate the ±3 sigma control limits for the subgroup statistic you are charting. For Xbar, an estimation of the process sigma based on Rbar or Sbar is used together with the CLT to convert that process sigma to the Xbar sigma.
As far as I can recall now, this is the only use of the CLT in SPC.
Just one note. Your wording “one is the sigma of the population and the other one is of the subgroups” can be confusing, as ”the sigma of the subgroups” could be the sigma of the subgroup averages (Xbar values), or the sigma of the subgroup ranges (R values), or the sigma of any other subgroup statistic.
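The CLT relation Gabriel uses, sigma(Xbar) = sigma(X)/sqrt(n), can be sketched with a quick simulation (the N(0,1) population, subgroup size of 4 and subgroup count here are arbitrary illustrative choices, not from the post):

```python
import random
import statistics

random.seed(1)
sigma, n, subgroups = 1.0, 4, 20000

# Draw many subgroups of n individuals from N(0, sigma^2) and average each
xbars = [statistics.mean(random.gauss(0, sigma) for _ in range(n))
         for _ in range(subgroups)]

# Standard deviation of the subgroup averages:
# CLT says it should be close to sigma / sqrt(n) = 0.5 here
sd_xbar = statistics.stdev(xbars)
print(round(sd_xbar, 3))
```

The simulated standard deviation of the averages lands very close to 0.5, i.e. half the sigma of the individuals, as the sqrt(n) divisor predicts.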
I did some more investigation on this subject. Back to normality assumptions for control charts: we agree that this is a myth. I looked at the following link: http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc321.htm and a book on control charts. While you don’t need to assume normality for control charts, for time to detection or ARL, and for the estimation of sigma, i.e. sigma hat = Rbar/d2, the individuals must be normally distributed. Thank you!
It is true but (there is always a “but”):
- The normal distribution does not exist in real life, for a number of reasons (starting with the fact that all real-life distributions are discrete because of the lack of infinite resolution in the data). However, it is true that many times the normal distribution is a good model for the data, but only within some range of a few sigmas around the average. Far in the tails (where the defectives and the false out-of-control points fall) the normal distribution will hardly be a good model, and if it were, that would be very hard to prove because there are so few data points in those zones. So things like the estimation of defective rates based on Cp/Cpk, or the calculation of ARL or time to detect, which are based on the normal distribution, probably will not match very closely what you get in reality, even if the data pass a normality test (which, by the way, tells you that there is not enough evidence to reject the hypothesis of normality, which is not the same as saying “it is normal”).
- d2 and c4 were developed for the normal distribution. But I was told that the Rbar/d2 and Sbar/c4 estimations of sigma are pretty robust to the lack of normality. In fact, as you know now, A2 is based on Rbar/d2 as an estimation of sigma, so if the estimation were not valid, the whole control chart would not be valid.
- Bottom line: when doing SPC, do not worry about normality unless you need to. I have experience using SPC with non-normal data and have had no problems related to the lack of normality. (On the other hand, knowing the shape of the individuals distribution and learning why it is not normal can be insightful for understanding the process.)
Glad there is something interesting to converse about again on the board. I agree with all you stated, and as I remember, the d2 constants used for estimating sigma assume a normal distribution, per the mathematicians who originally estimated them by hand calculation.
One item might be lost in this long, good string. SPC charts on Xbar are much more likely to be valid on a process that may not be “normally” distributed. I say this because, as we know, a consequence of the central limit theorem is that a distribution of means becomes more normal (in layman’s terms) than the individual points. My point is that the Western Electric SPC charts on individuals, e.g. an individuals/MR chart, are not going to work for a non-normally distributed process without finding a good subgroup that allows “the benefit” of an Xbar chart and makes the rules more tenable. Yes, an alternative is that a transform could be applied to the individuals to create a more Gaussian curve.
Gabriel, I’m curious from your perspective. Thanks ahead of time for your thoughts.
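The averaging effect described above can be sketched with a short simulation (Python; the exponential population and subgroup size of 4 are illustrative assumptions, not from the post): means of even small subgroups from a strongly skewed distribution are noticeably less skewed than the individuals.

```python
import random
import statistics

random.seed(7)

def skewness(data):
    """Sample skewness: average cubed z-score."""
    m = statistics.mean(data)
    s = statistics.stdev(data)
    return sum(((x - m) / s) ** 3 for x in data) / len(data)

# Strongly right-skewed individuals (exponential; theoretical skewness = 2)
individuals = [random.expovariate(1.0) for _ in range(8000)]

# Averages of consecutive subgroups of n = 4
# (theoretical skewness drops to 2/sqrt(4) = 1)
means = [statistics.mean(individuals[i:i + 4])
         for i in range(0, len(individuals), 4)]

print(round(skewness(individuals), 2), round(skewness(means), 2))
```

Even n = 4 roughly halves the skewness here, which is why Xbar charts tolerate non-normal processes better than individuals charts, though the averages are still visibly skewed, consistent with the “at least 30” rule of thumb mentioned later in the thread.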
Have you ever run a normality test on the R, S, p, np, c or u values before plotting a control chart? Of course not. If you had, then the assumption of normality would be violated. And don’t tell me that these charts are based on other distributions such as the binomial, Poisson, etc. All these charts have control limits at ±3 sigmas regardless of the underlying distribution.
Then, why would one care about normality on an individuals chart?
Of course an averages distribution will be more normal than an individuals distribution. But on the other hand, it is said that sample sizes of at least 30 are needed to get a pretty normal averages distribution from a skewed individuals distribution. What is the largest subgroup size you have ever used for an Xbar chart?
By default, I do not care about normality in SPC unless I find a practical problem related to the lack of normality when trying to use it (I’m not quite sure that has ever happened to me).
In my opinion, the main advantage of Xbar charts (why I use them instead of individuals whenever possible) is the greater power to detect special causes. I take the “improved” normality just as a beneficial side effect.
We all know that the Normal Dist. is a pretty good approx. for the Binomial (p chart) within a pretty robust range. If the p-bar happens to be quite small or large, the control limits are no longer symmetrical so interpretation becomes trickier. I assume the same will happen with the np, c and u. How does that factor into the discussion?
It is true. If you follow the rule of n*pbar = 5 at least, then the binomial distribution (p and np charts) will look pretty normal. Yet, by overlaying a normal distribution, the skew will become evident at first sight. Another indication of the skew is that for n*pbar = 5 the “theoretical” LCL is negative, meaning that by the normal distribution you would still have 0.135% of the points below some negative number, while in fact you will have 0% below zero.
More or less the same happens with the Poisson distribution (u and c charts).
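The negative “theoretical” LCL is easy to check directly (pbar = 0.05 with n = 100 is an assumed example chosen so that n*pbar = 5, the usual minimum for the normal approximation):

```python
import math

# Hypothetical p-chart example: pbar = 0.05, n = 100, so n*pbar = 5
pbar, n = 0.05, 100

# Normal-approximation 3-sigma limits for the proportion defective
three_sigma = 3 * math.sqrt(pbar * (1 - pbar) / n)
lcl = pbar - three_sigma  # "theoretical" lower control limit
ucl = pbar + three_sigma

print(round(lcl, 4))  # -0.0154: negative, so in practice it is clipped to 0
```

The normal model puts 0.135% of the probability below that negative limit, while the real binomial puts exactly 0% below zero, which is Gabriel’s point about the skew.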
Also skewed are the R and S charts when the subgroup size is small. I ran a simulation in Excel, taking random values from a normal distribution with sigma = 1 and making 100 subgroups of sizes 2 to 4 to calculate the R and S values, and made histograms (placed at the end of this post) to visualize the distribution shapes. Again, the skew is evident and greater for the smaller subgroups. Not by chance, the theoretical LCL is negative for subgroups of up to 6 for R and up to 5 for S. Again, this means that while by the normal distribution you would still have 0.135% of the R or S values below some negative number, in fact you will have 0% of the R and S values below zero.
I find especially interesting the case of the IX-MR chart, which is the “most questionable” for non-normality issues. Why would one care too much about the normality of the individual values when on the same sheet you are plotting such an awfully non-normal distribution, the moving ranges (a range chart with subgroup size 2)? That distribution is so skewed that you will hardly ever see an individuals distribution more skewed than it, and yet no special precautions are taken for its lack of normality.
Two final comments:
- Any (as far as I can imagine) subgroup or sample statistic will be more and more normal as the sample or subgroup size increases. The CLT explains this for averages, but it holds true for medians, ranges, standard deviations, defectives counts and defects counts, among others.
- I want to make clear that I am not against the normal distribution. I love it! The only thing I want to say is: don’t freeze if you find the thing you are charting is not normal. As long as you don’t find problems related to the lack of normality, just go straight ahead with the SPC without any special considerations. I’ve done it and it has worked OK so far.
Histogram frequencies from the simulation (bin lower edge → count, 100 subgroups each):
R for n=2: 0.0→17, 0.5→24, 1.0→21, 1.5→16, 2.0→14, 2.5→5, 3.0→1, 3.5→1, 4.0→1, 4.5→0
S for n=2: 0.0→15, 0.3→22, 0.6→18, 0.9→17, 1.2→14, 1.5→8, 1.8→3, 2.1→0, 2.4→2, 2.7→1
R for n=3: 0.0→0, 0.5→16, 1.0→18, 1.5→25, 2.0→19, 2.5→10, 3.0→7, 3.5→2, 4.0→2, 4.5→1
S for n=3: 0.0→0, 0.3→20, 0.6→24, 0.9→22, 1.2→18, 1.5→11, 1.8→3, 2.1→1, 2.4→1, 2.7→0
R for n=4: 0.0→0, 0.5→5, 1.0→14, 1.5→28, 2.0→24, 2.5→9, 3.0→10, 3.5→4, 4.0→2, 4.5→3, 5.0→0, 5.5→1
S for n=4: 0.0→0, 0.2→4, 0.4→14, 0.6→22, 0.8→21, 1.0→12, 1.2→10, 1.4→7, 1.6→3, 1.8→3, 2.0→2, 2.2→2
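A rough Python equivalent of Gabriel’s Excel simulation (same assumptions: standard normal individuals; here only subgroups of size 2, with more replications to smooth the estimates). The right skew shows up as a mean pulled well above the median, with the mean of R landing near d2 = 1.128 for n = 2:

```python
import random
import statistics

random.seed(42)
subgroups = 10000  # more than the 100 in the Excel run, to smooth estimates

# Range of a subgroup of n = 2 standard normal values is |x1 - x2|
ranges = [abs(random.gauss(0, 1) - random.gauss(0, 1))
          for _ in range(subgroups)]

mean_r = statistics.mean(ranges)
median_r = statistics.median(ranges)

# Right skew: the mean sits above the median;
# the mean of R should land near d2 = 1.128 for n = 2
print(round(mean_r, 2), round(median_r, 2), mean_r > median_r)
```

The mean landing on d2 is exactly why Rbar/d2 estimates sigma, and the mean-above-median gap is the skew visible in the histograms above.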