iSixSigma

Reports of Our Demise

Okay, okay. I know this has already been covered to death in other blogs and various discussion forums. But I am nonetheless compelled to offer my own take on the Wall Street Journal’s article concerning the departure of Home Depot CEO Robert Nardelli – and more specifically on the comments within that article suggesting that this is a substantial example of Six Sigma “not panning out as promised.”

The article prominently cites a study conducted by QualPro Inc. Before I get started, I’ll freely admit I haven’t read the study. I couldn’t find it on their website, or anywhere else online. (If anyone knows where to get it, please let me know.) So, I’ll have to rely on the words of the article’s author to describe it:

“Now QualPro Inc., a company that markets a competing process-management technique, has issued a study comparing the stock performance of companies that adopted Six Sigma with the performance of the Standard & Poor’s 500-stock index. QualPro has done work for Lowe’s Cos., Home Depot’s main competitor.

“Given that the study was issued by a Six Sigma competitor, it isn’t surprising that the comparisons aren’t flattering.”

They go on:

“A number of former GE executives — including W. James McNerney Jr., former CEO of 3M Co.; Dave Cote, CEO of Honeywell International Inc.; and Mr. Nardelli — helped spread the Six Sigma word but have seen their companies’ stock prices lag.

“Since announcing the adoption of Six Sigma on July 1, 2001, Home Depot shares are down 8.3% compared with a 16% rise in the S&P 500 over the same period. The stock rose more than 2% yesterday on the New York Stock Exchange, to $41.07, after Mr. Nardelli’s resignation.

“Honeywell shares are down 7.2% since its Six Sigma announcement in early January 2000, compared with a 3.6% fall in the S&P 500. Shares of 3M are off about 1% since late December 2003 versus the S&P 500’s 29% climb. GE shares rose sharply in the 1990s, but they’re down 16% since July 2000, when the company adopted Six Sigma, compared with the 2.6% fall in the S&P 500.”

First of all, I know I speak for most of us when I howl: correlation does not necessarily indicate causation. There’s enough material for several blog entries in that alone, but I’ll restrain myself because there are more interesting things to quibble with. For example, what to make of this?

“Of the 58 companies reviewed in the QualPro report, 52 underperformed the S&P 500 index from the time they launched their Six Sigma programs through Dec. 5, 2006. Other underperformers include Lockheed Martin Corp., Ford Motor Co. and Xerox Corp.”

Yet the George Group claims on their website that “our client index has tripled in value while the S&P 500 has declined.” Further, on the back cover of this month’s iSixSigma magazine you can see their data indicating that “George Group clients outperform all major indices.” Presumably a large portion of those clients deployed Six Sigma. So what does it all mean? Is Six Sigma a good thing or a bad thing for stock price? George Group and QualPro both cite Xerox as an example – what are we to take from that? The point is, it’s impossible to tell. I’ve no doubt that QualPro and George Group both used accurate data analyzed correctly, and yet they report diametrically opposite conclusions. Clearly what was measured and how it was measured must be different, but none of that nuance is communicated. Again, there are several blogs’ worth of questions to ponder here.

More importantly, neither study asks the more interesting question, which is: what would the stock price of these companies have done over the period of the study if they had not deployed Six Sigma? And that’s a question that we can’t answer, because it requires an experiment that’s impossible to run. No one understands Y=f(x) for stock price (at least, no one worth less than 10 figures), so talking about single data points collected on a single x for a particular company doesn’t carry much weight. And given the massive differences company-to-company, I’m not sure aggregating 50 different companies together in a single study is any better. It just sounds better to the casual reader.

While I’m on the subject of QualPro, I found some of the statements on their website, um, interesting. For example, they say:

“Most of us were taught that the optimum process for gaining knowledge is to test one-thing-at-a-time and hold everything else constant. It is called the scientific method. MVT® (Multivariable Testing) proves that the scientific method doesn’t really work in the real world.”

I feel compelled to point out that the scientific method has been around for well over two thousand years. Has QualPro suddenly discovered something that minor logicians like Aristotle, Descartes, and Gödel missed? Can QualPro seriously believe what they are saying? It’s ridiculous. In a sense they win by default, because I can’t even formulate a cogent argument to the contrary. It’s like trying to box with a swamp vapor – you can’t hit a bad smell.

And don’t even get me started on marketing multi-variate testing as “better” than Six Sigma. What decent program doesn’t include multi-variate testing where appropriate? As for equating OFAT with the scientific method, I’m personally insulted by the comparison. What is it about the induction/deduction cycle that precludes testing more than one variable at a time? Nothing. The scientific method says nothing about how to go from induction to deduction; DOE or “MVT” is a great way to go. Fine, promote MVT, but why denigrate the scientific method to do it? MVT and the scientific method are perfectly compatible.
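
To make that concrete, here is a minimal sketch (my own illustration, with fabricated numbers; nothing from QualPro or the WSJ) of “testing more than one variable at a time” as perfectly ordinary hypothesis testing: a plain 2×2 factorial, fit by least squares with both main effects and the interaction included.

import numpy as np

# Coded factor levels: -1 = low, +1 = high; two replicates per cell
A = np.array([-1, +1, -1, +1, -1, +1, -1, +1], dtype=float)
B = np.array([-1, -1, +1, +1, -1, -1, +1, +1], dtype=float)
y = np.array([10.1, 12.0, 11.2, 16.9, 9.8, 12.3, 10.9, 17.2])  # made-up responses

# Design matrix: intercept, main effects, and the A*B interaction
X = np.column_stack([np.ones_like(A), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, c in zip(["intercept", "A", "B", "A*B"], coef):
    print(f"{name:9s} {c:6.2f}")

Hypothesize, design, run, analyze, revise – that is still the induction/deduction cycle, just with two knobs turned instead of one.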

Finally, just to continue being picky, despite QualPro’s claims to the contrary, you can learn about interactions via OFAT experimentation. If you don’t believe me, believe Daniel:

Factorial One-Factor-at-a-Time Experiments
Cuthbert Daniel
The American Statistician, Vol. 48, No. 2 (May, 1994), pp. 132-135
doi:10.2307/2684266
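
For the curious, here is a toy illustration (mine, with made-up responses; not a table from Daniel’s paper) of the basic idea: walk a 2×2 design in an order that changes exactly one factor per run, then analyze the completed runs as an ordinary full factorial, interaction and all.

import numpy as np

# One-factor-at-a-time execution order: (-,-) -> (+,-) -> (+,+) -> (-,+)
runs = [(-1, -1), (+1, -1), (+1, +1), (-1, +1)]
y = np.array([10.1, 12.0, 16.9, 11.2])  # fabricated responses, one per run

A = np.array([a for a, _ in runs], dtype=float)
B = np.array([b for _, b in runs], dtype=float)

# Standard factorial contrasts once all four cells are filled
effect_A = 2 * np.mean(A * y)        # main effect of A
effect_B = 2 * np.mean(B * y)        # main effect of B
effect_AB = 2 * np.mean(A * B * y)   # the AB interaction, estimable even with the OFAT run order

print(effect_A, effect_B, effect_AB)

Each successive run changes only one factor, yet the finished data set is a full factorial, so the AB interaction drops right out of the usual contrast. (Daniel’s treatment goes well beyond this toy case, but the principle is the same.)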

In the end though, I guess none of this matters because:

“QualPro’s proprietary Multivariable Testing (MVT®) system uses more complex mathematics than is used in a Polaris Missile.”

Boy, I can’t think of a better reason to adopt a continuous improvement methodology than that.

Comments 16

  1. Kevin

    CNN also referenced the QualPro study about six months ago when discussing "changes" to Welch’s playbook. I think you might have commented on it then too. Here was our take:

    Evolving Excellence post

    Kevin

  2. Andrew Downard

    Kevin,

    Thanks for your comment, and the link. I’m refraining from commenting much on the QualPro study itself since, as I said, I haven’t read it. Plus many other smart people (like you) have already done so.

    Andrew.

  3. Sean

    QualPro decries the use of one-factor experiments, yet they judge a company’s stock price performance based on the single variable Six Sigma = True or False. Couldn’t that variable be broken down into a few finer pieces?

    Perhaps they need more practice with Multi-Variate Testing.

  4. Mike Carnell

    When I heard Prince was the halftime feature at this year’s Super Bowl I was emotionally moved. To paraphrase Lewis Black, "when I think of football – I think of Prince." That is pretty much the issue with the WSJ and their involvement in business improvement – "I know when I am looking for an authority on business improvement I think WSJ." Karen Richardson chose to lay her integrity up on the altar to create some controversy around a rag that is the business world’s version of the National Enquirer.

    You need to consider how many people have been charged with insider trading because they have a copy of the WSJ. It is a low level of information that is primarily used by middle managers to create the image that they have some business acumen. I am sure that if that article was visible in the bottom of a bird cage there are birds all over the world who have refused to defecate until there was something lining their cage worth defecating on.

    That was cathartic.

    Your position on OFAT makes no sense. Justifying it because it was used by someone who was as technologically advanced as the early Greeks is pretty weak. OFAT has a use. SS uses things such as ANOVA and multiple regression when we want to evaluate multiple factors. Why turn it into some stupid discussion about what can and cannot be done with OFAT?

    Has anyone really looked at the level of mathematics used in a Polaris Missile? How is difficulty measured? Maybe the comment doesn’t mean anything.

  5. Andrew Downard

    Mike,

    The point I was trying to make was not that OFAT is bad (or good). The utility of OFAT, like anything else, depends on context and desired outcome. That’s why I included the link to the Daniel article, which describes a very elegant and useful approach to OFAT. The Daniel method also happens to be a way to estimate interactions using OFAT, which QualPro suggests can’t be done.

    What I took exception to was the suggestion that "scientific method" is synonymous with "OFAT". I don’t believe they are synonymous; the QualPro website does. Confounding the terms annoys me. There is nothing within the scientific method that precludes the use of multivariate testing, or OFAT, or any other method of testing hypotheses. Even if QualPro has a problem with OFAT, that doesn’t mean the scientific method is incorrect. That’s my point.

    Likewise, I wasn’t trying to justify OFAT "because it was used by someone who was as technologically advanced as the early Greeks." But I do think the fact that the scientific method (as a model for learning) has survived since that time is good evidence for the robustness of the model. As Box says, "all models are wrong, some are useful". My suggestion is that the scientific method has long proven its utility, and nothing QualPro has to say convinces me otherwise.

    And of course SS uses all sorts of tools to evaluate multiple factors at once. This is exactly what I was trying to point out: selling MVT® as something substantially more than SS is bogus.

    Andrew

  6. Andrew Downard

    Mike,

    I’m not trying to fight with you, or anyone, about the utility or applicability of a statistical tool. If you read through my previous postings, I hope it comes through that my approach is more holistic than that.

    On the other hand, QualPro’s suggestion that you can’t learn about interactions via OFAT is factually incorrect. That doesn’t make OFAT more/less applicable, or a good/bad tool. As I said before, the usefulness of a particular tool depends entirely on context and desired outcome.

    I don’t want to sound like a broken record, but my real issue was the conflation of OFAT and the scientific method. My reading of the QualPro website was that:

    A: OFAT=scientific method
    B: OFAT=bad
    Therefore: scientific method=bad

    I think you and I agree that B is false (since no statistical tool is inherently good or bad on its own), and I further contend that A is false. Since I believe both premises are false, I believe the conclusion is also false.

    Regarding the WSJ article itself, like it or not, it’s out there. Lots of people have read it. I know, since many of them emailed me about it. I don’t think the WSJ is reporting anything that is untrue, or that wasn’t actually said by someone who considers themselves an expert. So my issue isn’t with the WSJ or the reporter. But there is additional nuance and context to be considered, and the quotations should not go unchallenged. And isn’t that what blogs are for these days?

    Andrew.

  7. Mike Carnell

    This scenario goes on virtually daily in the Discussion Forum. There is no upside to a fight over the applicability of a statistical rule/theory/tool. People quote other people, throw out references, post links (except here, because they are too long), etc., and in the end you get two things: 1. nobody changes their opinion 2. it overplays the importance of the statistical tools in the improvement process.

    Depending on which consultant you use you get varying degrees of emphasis on DOE. How important it is in an application depends on the processes that are being used. In general we are stretching to see DOEs as essential in about 10% of the projects, and frequently in an assembly-type process none at all.

    QualPro’s position is a statistician’s approach – lots of complex methodology regardless of the problem – one size fits all. The beauty of SS, depending on how it is taught, is that at any point in the process you can drop out, go to Control, and tie off the project.

    The real issue that comes up here is that a reporter has done a half-assed job of putting a story together for the sake of a little sensationalism and some controversy to sell a few newspapers. Fast Company and their Debunking Squad were equally negligent.

    If you go back to ’95 when we were delivering Allied, USA Today interviewed Bossidy about SS and then used some drivel from a technician as an opposing opinion. Basically the journalism sucks.

    Just my opinion.

  8. Mike Carnell

    Ron,

    I love the point about the Google search engine and using Qu*lPro.

    I am guessing that the Qu*lPro people aren’t laughing all the way to the bank. When you have such a loose grip on reality that you make the Polaris Missile comment, there is a high probability you don’t have a sense of humor – that is a pretty geeky comment.

    Nice Blog. Thanks.

  9. Mike Carnell

    You may not have an issue with the WSJ but I do. When you go to someone who sells a competing product, get comments, and don’t have the integrity to find someone with a working knowledge of SS to address those comments, then you haven’t done your job with integrity.

    Back to the OFAT. I can make a case to use a hammer to put in screws based on the amount of time it takes to put them in. The hammer will do the job. That doesn’t mean it is the right tool, let alone the most efficient tool. OFAT = One Factor At A Time. If I want to study interactions it takes two factors or more. There are tools that study two factors or more. When I am working a problem, my customer wants an efficient methodology and resolution. I use the tool that is made to do interactions to gain that efficiency. If I have an issue selling the result because I chose to make a point that there was a way to use OFAT to get to interactions, I could easily get myself into a debate with the geek in the corner who doesn’t buy that. For what? There is no advantage to the customer, and it is an esoteric issue that they don’t care about. Clean, efficient results and minimum controversy drive customer satisfaction, not statistical prowess.

    Just my opinion.

  10. Andrew Downard

    Mike,

    No one is talking about using a hammer on a screw. I’ll repeat what seems to be my mantra: the utility of any given tool depends on context and desired outcome. Of course I wouldn’t use OFAT to get at interactions if there was a better way, and usually there are better ways. On the other hand, sometimes concerns like safety or economics mean we are reluctant to change multiple factors at a time. In those situations – good news! – we can still get at interactions. That’s my only point. QualPro says you CAN’T; Daniel shows you CAN. I’m not going to wade into whether you SHOULD, which is a separate question entirely.

    Andrew.

  11. systhinc

    I have saved 120MM USD a year with ANOVA (which took 6 months of data collection and analysis) and 240k USD with a five-minute observation of a workplace. Which is better? Who the hell knows? It’s all about learning as many tools, techniques, and PEOPLE as you can. The great evil is the stock market itself and its false metrics of "Greatness." All market data is snapshot data, except, maybe, the George Group’s "Value Mountain," which measures Book to Street Value, Economic Value Added, and CAGR to see where reality lies in market performance.

  12. Ron

    Wow! Lots of emotions on this topic. I like it!

    My take on this whole thing is that the Qu*lpro executives are laughing all the way to the bank.

    They want people arguing about this and using their name in vain or glory… they don’t care which. They are working on brand awareness, and every time we type their name a Google crawler has more opportunity to find it, thus increasing their website rank! OK, so maybe this is a stretch.

    Anyhow, and please don’t hang me for this, I am interested in what Qu*lpro is preaching. It sounds intriguing… and if I can add more tools to my tool kit I will be better for it. Does it mean I will stop using Six Sigma and Lean? Of course not.

    BTW – I keep stating Qu*lpro since they will get no free Google support from me! Ha!

    Smile everyone… life is good even if you are a Buckeye fan like me.

  13. aint that easy

    Regarding the Daniel article: he was merely pointing out that a simple factorial experiment can be run in an order that only requires one factor be changed between each run. When you’re finished, you still have a full factorial. Analysis of a full factorial experiment is not, in any commonly accepted interpretation, "one-factor-at-a-time" experimentation.

    The Science Fair Guidelines my son brought home from school better represent the commonly held understanding of "OFAT," which is: "a test condition’s optimal setting can only be determined when all other conditions are held constant." That approach will never *systematically* yield information on interactions. You *might* still find the optimum, but it will be luck that gets you there.

    I’ll leave it to the readers to judge which definition of OFAT better represents what they have encountered.

  14. Andrew Downard

    Point well taken.

    I do take exception to your turn of phrase, though. You say Daniel "was merely pointing out that a simple factorial experiment can be run in an order that only requires one factor be changed in each run", but I don’t think there is anything "mere" about it. I’ve never run into anyone who knew this or used the method without reading Daniel’s article first. Like many simple techniques, it’s only obvious to most people after it has been explained.

    IMO, the elegance in this method is that the execution is OFAT (okay, okay – OFAT in the most literal sense only, not in the science-fair-guidelines sense), but the analysis is factorial. This isn’t magic, of course: the price is extra runs, among other things. But if something is making you afraid (literally or figuratively) of turning multiple knobs at once in your process, this is a great way to avoid it while still retaining the power of DOE.

    Thanks for the response.

    Andrew.

  15. aint that easy

    Andrew:

    I didn’t intend in any way to denigrate the Daniel article, or any of his long list of contributions. I wanted to make sure that those unfamiliar with the article understood a bit more about it.

  16. Acai Optimum

    Wonderful post… Very informational and educational as usual!
