Imagine the scenario.

Herman and Gary both manage processes for your company. Herman’s process involves a simple matching of supplier invoices to purchase orders. Gary’s process is much more complex: it involves compliance with government regulations and requires massive amounts of data to be collected, accurately summarized and reported in several different formats to local, state and federal agencies.

Your company is engaged in improvement activities and keeps data on reprocessed work (defects). Herman’s group finds very few internal defects, but the folks responsible for accounts payable find that about 5 percent of the work is in error, causing delays in payment to the suppliers. Gary’s group finds that about 20 percent of its work needs to be reprocessed, forcing overtime to ensure reports are always filed on time with the appropriate agencies. Management reviews the results monthly and focuses on yield and labor variance.

What does management say to Herman? What does management say to Gary? In most companies Gary would be asked why he couldn’t get his department to perform like Herman’s. What are the implications of this type of behavior? The politically savvy types, like Herman, migrate to the simple processes. However, when we compare the work done by both groups, we find that Gary’s group is ten times more error-free! Has this company missed a great opportunity to learn?
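
To see why, consider a rough back-of-the-envelope comparison in the spirit of DPMO. The volumes and opportunity counts below (5 opportunities per invoice match for Herman, 200 per regulatory report for Gary) are purely hypothetical numbers chosen for illustration; the point is only that normalizing defects by the number of opportunities can reverse the ranking.

```python
def dpmo(defective_units, total_units, opportunities_per_unit):
    """Defects per million opportunities, assuming one defect per defective unit."""
    return defective_units / (total_units * opportunities_per_unit) * 1_000_000

# Hypothetical volumes and opportunity counts -- for illustration only.
herman = dpmo(defective_units=50, total_units=1_000, opportunities_per_unit=5)    # 5% of invoices in error
gary = dpmo(defective_units=200, total_units=1_000, opportunities_per_unit=200)   # 20% of reports reworked

print(f"Herman: {herman:,.0f} DPMO")  # Herman: 10,000 DPMO
print(f"Gary:   {gary:,.0f} DPMO")    # Gary:   1,000 DPMO -- ten times fewer defects per opportunity
```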

This is the nature of opportunity counting and related metrics like DPMO and sigma levels. They are tools for management, not tools for the change agents and change teams. Change agents and teams only need to know current performance in the absolute simplest of terms and whether that performance is improving. Management needs to know where the best processes are and where the worst processes are. We flag the best so that everyone can learn from them (rule #1 of Six Sigma – steal shamelessly all non-proprietary and non-copyrighted ideas). We flag the worst so that management will take a hard look at the issues and improve them.

Some caveats for opportunity counting (the first three could actually be caveats for policy deployment):

  1. Whatever method is chosen, it must not conflict with other company initiatives.
  2. Whatever method is chosen, it must be simple and uniform throughout the organization.
  3. Companies pursuing defect reduction are also pursuing time reduction (responsiveness is the real competitive weapon, defects just get in the way of that).
  4. ‘Complexity’ is a measure of how complicated a particular product or service is. It is doubtful that we will ever be able to quantify complexity in an exacting manner.
  5. If we assume that all characteristics are independent and mutually exclusive, we may say that ‘complexity’ can be reasonably estimated by a simple count. This count is referred to as an opportunity count.
  6. In terms of quality, each product manufacturing or service delivery step represents a unique ‘opportunity’ to either add or subtract value.
  7. Remember, we only need to count opportunities if we want to estimate a sigma level for comparisons of goods and services that are not necessarily the same.

That said, how do we get good guidance on how to do this? After all, the sigma metric in Six Sigma is based on an accurate counting of opportunities. If there are to be any rational discussions and comparisons of your organization’s capabilities, opportunity counting must be taken seriously.
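
For reference, here is a minimal sketch of how an opportunity-based defect rate is conventionally converted to a sigma level, using the customary 1.5-sigma shift. The function name and the figures (which reuse the hypothetical opportunity counts from the example above) are illustrative assumptions, not anything prescribed by the sources discussed here.

```python
from statistics import NormalDist

def sigma_level(defects, units, opportunities_per_unit, shift=1.5):
    """Short-term sigma level from defects per opportunity: Z(1 - p) + shift."""
    p = defects / (units * opportunities_per_unit)
    return NormalDist().inv_cdf(1 - p) + shift

# Reusing the hypothetical figures from the earlier example.
print(round(sigma_level(50, 1_000, 5), 2))     # ~3.83 for Herman's process
print(round(sigma_level(200, 1_000, 200), 2))  # ~4.59 for Gary's process
```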

Let’s turn to that touchstone of clarity, the Holy Grail for all Six Sigma professionals, Mikel Harry’s The Vision of Six Sigma: A Roadmap for Breakthrough. Chapter 12, Nature of Opportunities, takes up the question. While not exactly straightforward, it discusses ‘active’ vs. ‘passive’ opportunities and concludes on page 12.16 that if we do not measure (test or inspect) an opportunity, it is not really an opportunity. So opportunities must be the things we test or inspect. Right?

Well, Motorola really touts the ‘bandit’ pager line in Boynton Beach, Florida, as an example of Six Sigma thinking, and one of the strongest points they make is that the product receives no test or inspection prior to being shipped to the customer. But by Mikel Harry’s definition, no test and no inspection mean no active opportunities, so the sigma level does not exist, and therefore this process cannot be compared with or learned from.

Anybody seeing anything wrong with this picture? This definition cannot be correct, but what is? Post your thoughts on the iSixSigma discussion forum.
