“There are lies, damn lies and statistics,” we have all been warned since the 19th century. The warning is apt these days because of questionable uses of statistics on exhibit in recent months. The principal and founder of a performance-improvement consulting firm asserts that businesses using Six Sigma underperform their peers in stock price.
This article is in response to material published in a number of business media outlets earlier this year, including The Wall Street Journal in January. Related commentary can be found in a blog by Michael Marx of the iSixSigma Blogosphere.
Charles Holland of QualPro claims that among 58 companies that have announced broad use of Six Sigma, the stock performance of 91 percent trailed the S&P 500 index since the launch date of their implementations. Regrettably, some otherwise savvy business publications have been seduced by this data and quoted the conclusions.
QualPro offers businesses a design of experiments system that the firm’s website claims is superior to Six Sigma. The system, called multivariable testing, “uses more complex mathematics than is used in a Polaris Missile,” according to the website. Nowhere does the site state the obvious: design of experiments is one of the many tools of Six Sigma.
Six Sigma achieves only 10 percent of potential process improvement, the site asserts, while multivariable testing is credited for capturing a dazzling “90 percent of process improvement missed by most Six Sigma efforts.” How can missed potential be measured? The site is silent on this question.
Let us examine the logic QualPro uses in its show trial of Six Sigma.
The company uses a research design that equates correlation with its distant cousin – causation. It examines two variables – a company’s use of Six Sigma and its stock performance – and concludes there is causation when it has shown only correlation. Nowhere to be seen are the many different factors that affect stock price movement. Where was multivariable testing in this experiment?
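The flaw in that research design can be illustrated with a small simulation (entirely hypothetical data, not QualPro’s): when two variables are both driven by a hidden common factor, such as overall market movement, they correlate strongly even though neither causes the other.

```python
# Hypothetical illustration: two series that share a common driver correlate
# strongly with each other despite neither one causing the other.
import random

random.seed(0)
market = [random.gauss(0, 1) for _ in range(500)]  # hidden common factor

# Each series follows the market plus its own independent noise.
a = [m + random.gauss(0, 0.5) for m in market]
b = [m + random.gauss(0, 0.5) for m in market]

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    sx = (sum((xi - mx) ** 2 for xi in x) / n) ** 0.5
    sy = (sum((yi - my) ** 2 for yi in y) / n) ** 0.5
    return cov / (sx * sy)

# High correlation, zero causation -- the market moves both series.
print(round(corr(a, b), 2))
```

A study that looked only at the two series would “discover” a strong relationship while missing the confounding factor entirely, which is precisely the trap of equating correlation with causation.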
Holland demonstrates a woeful misunderstanding of the methodology in an article in the January-February 2007 issue of Chief Executive magazine. “Six Sigma is merely an overpriced set of low-level statistics courses,” he says. “While Six Sigma may detect trouble, it does not lead to a redesigned, more effective process. The best that can be expected is slow, incremental improvement – and most companies do not reach this stage.”
QualPro’s website is correct about Six Sigma in one of its statements. “With the organizational pressure on managers to do Six Sigma comes a tendency to do something and activity often is expended on projects with no potential,” it says. This cannot be denied.
Six Sigma sometimes is deployed for projects that provide little payoff. Moreover, standing alone the methodology cannot provide the level of control an organization’s leaders need to ensure that everyone in the business is doing the right things right and at the right time – what we call the “3Rs of Business.” Even when combined with Lean, Six Sigma does not provide the integrated excellence at all operational and corporate levels that should be required and can be measured with bottom-line predictability.
But the contributions Six Sigma is making to businesses cannot be dismissed so cavalierly. The methodology has saved Fortune 500 companies more than $400 billion since 1987, according to research by Michael Marx of iSixSigma Research. DuPont recently credited Six Sigma for $3 billion in savings. On QualPro’s website, in stark contrast, DuPont credits multivariable testing for improvements worth $23 million.
The correlation-equals-causation hypothesis Holland has relied on can be used to reach any wished-for conclusion. Ron Marchelletta, an advisory engineer who is a Lean Six Sigma Black Belt, demonstrates this with an exercise that uses QualPro’s own data.
Marchelletta examined the stock performance of 12 companies identified as QualPro clients on the company’s website or in an article in The Wall Street Journal that quoted Holland. He then looked at the stock performance of 12 companies listed as Six Sigma users from the same sources. He improved on Holland’s research design, which fails to account for market swings, by establishing a common date as a baseline to compare the stock performance of these companies. Making a solid choice, he selected Sept. 17, 2001, the day the markets reopened at a low after the Sept. 11 attacks.
The result? The mean increase in stock price for the Six Sigma users was just under 95 percent, while it was only 84 percent for the multivariable testing companies. Does this demonstrate that Six Sigma is superior to multivariable testing? No, because correlation does not prove causation. Though Holland may believe it does, we doubt that he will cite this exercise in his next polemic about Six Sigma.