We all want to get it right the first time. And a brief browse through the business section of any major bookstore, or even various sections of this website, will turn up hundreds of best practices and other bits of advice that promise to help us get it right the first time. Consultants would never get the job unless they promised they could help us get it right the first time. Who would hire an expert who gruffly suggested we all muddle through it together, and promised only that we’d learn from our mistakes and get there somehow? There is tremendous desire and pressure to get it right, right away. Even though this almost never happens in real life, there is a stigma attached to failing to cite it as our target.
“It” can be just about anything in business, but let’s stick to Six Sigma deployment. We all want to have a perfect program, preferably right out of the gate. At the very least we want one that is better than that of our (internal and external) competitors. And we certainly don’t want to make the same mistakes that others have made. We want to benchmark, learn from the experts, and use established best practices to ensure that our deployment is smooth and successful right from the start. Right?
Well, maybe not.
Every single Six Sigma deployment success story I’ve ever heard and believe to be true involves a significant period of fumbling. Many of them have full-blown episodes of tremendous counter-productivity. The most entertaining involve heated disagreements and intrigue worthy of a king’s court. Programs that thrash and flail like a drowning man are not at all uncommon. Idiocy is not unheard of. But nonetheless, these truly are success stories in every sense of the phrase. These are programs that are strong and productive today at companies we have all heard of, big and small.
While it’s tempting to believe that these programs are successful in spite of the trials and tribulations they have gone through, I am increasingly of the opinion that they are successful because of those trials and tribulations.
Best practices are informative, and we should learn from them where we can. But by definition, they are solutions for problems encountered elsewhere. If company A runs into a problem or opportunity and finds a way to profit from it, it does not immediately follow that company B can apply the same solution and be successful with it. In my opinion this is widely misunderstood: the best practice is not what company A did, it is how company A decided what to do. Put another way, the best practice should not be the solution itself, but rather the method of developing the solution. Not the outcome, but the structure of the struggle.
The struggling, in my opinion, is unavoidable. As an analogy, a central tenet of change management is that resistance is positive insofar as it is a sign that change is occurring. The longer you avoid resistance, the longer you put off true change. The sooner you embrace it and deal with it, the sooner you can move on to a new state. I think the same is true of many of the struggles encountered in even the best-planned Six Sigma deployments. It is precisely those struggles that catalyze the necessary learning by the organization. I’m not quite ready to suggest that we should intentionally create difficulties, but maybe that’s just my own timidity. Maybe touching the fire earlier and more often in a deployment would get us to the desired end faster. At the very least, I no longer believe we should put a lot of energy into avoiding the messy bits – it never works in the long run anyway, and only delays the inevitable.
As an example, take project selection. Every deployment I’ve been associated with has had significant angst around what projects to work on, how to select them, who works on them, and so on. These are classic deployment questions. And despite reams of advice and best practices galore, I still find the most efficient way to find the “right” answer for a site, plant, business, division, group, or company is simply to pick some method and give it a try. No matter what the first iteration is, intelligent people will disagree about its efficacy and execution, sometimes loudly. So discussions are had, flowcharts altered, criteria changed, and a new iteration develops. And this repeats until a workable solution emerges. Is it messy? Yes. Is it painful? Yep. Does it work? Every time.
I don’t want these observations construed as license to deploy without careful thought and planning. Without significant vigilance and strong guidance, the mess stays a mess. But I do think many deployment leaders (me included) tend to shoot for perfection instead of simply reaching out to touch the fire now and then. Minor burns heal quickly, after all.