A large corporation recently conducted a competition to identify the organization’s best Six Sigma projects of the previous year. Out of more than a hundred submissions, only one actually validated its improvements with a comparative experiment (z-test, t-test, chi-square, etc.). What did the rest do? The same thing they did before Six Sigma: the mean went in the right direction, so the company was happy (mostly because it saved a lot of money). While this approach may be increasingly common, it is not how Six Sigma is supposed to work.

Originally, Six Sigma projects were about validating process improvements and savings with sound statistical methods. This approach was at the heart of what made Six Sigma different and powerful. Organizations desiring a truly successful continuous improvement program need to renew the original emphasis on data-based decision making within the Six Sigma methodology.

Diagnosing the Current Approach to Decision Making

One of the most important contributions of Six Sigma initiatives is the use of validated data to make statistically based decisions that both promote and validate change. The term “validated” refers here to the use of known good data, collected and analyzed to support and confirm decisions made by a project team. The extent to which organizations actually utilize data-based decision making within Six Sigma initiatives varies, and can be determined by answering the following questions:

1. Measurement Validation (Measure Phase of DMAIC)
a. Do we know how to validate attribute data for counts and dispositions?
b. Do we know how to validate continuous data?
c. Do we know when time studies and paired t-tests are applied to validate data?

2. Comparative Experiments (Analyze Phase of DMAIC)
a. Do we use comparative experiments to stratify data and obtain clues to potential root causes?
b. Do we use relational statistics to quantify and model the impact of root causes on the “main pain”?

3. Research Methods (Improve Phase of DMAIC)
a. Do we understand how to leverage research methods?
b. Do we know how to leverage sample size, randomization, and control groups?

4. Statistical Validation (Improve and Control Phases of DMAIC)
a. Do we use comparative experiments to validate improvements in both central tendency and variability?
b. Do we use comparative experiments to validate improvements in savings and process capability?

5. Sustainability (Control Phase of DMAIC)
a. Do we use statistics to prove short-term sustainability?
b. Do we use statistics to prove long-term sustainability?

Too many Six Sigma teams respond “no” to most of these questions. If an organization finds that true among its practitioners, its Six Sigma initiative could likely benefit from an increased understanding of, and reliance on, data-based decision making. First, however, it is worth considering the alternatives to data-based decision making, and why they are not an adequate basis for quality improvement.

Other Decision-Making Methods

Many decision-making typologies have been proposed, but most include the following basic categories: traditional, authoritative, intuitive and scientific.

Traditional decision making means doing things the way they have always been done. Tradition is codified in procedures, standards, regulations and doctrine. Someone at one time or another found a “one best way” to conduct business, solve issues or resolve problems. Tradition becomes unconscious; it is easier to follow established rules than to question them. This is a natural coping mechanism for lowering stress and making efficient use of organizational resources. Sometimes traditional processes and codified knowledge are necessary and purposeful; however, many are not. These are the types of situations that Six Sigma initiatives should be used to challenge, rethink, redesign and improve.

According to organizational theory, as noted by J.R.P. French and B. Raven in “The Bases of Social Power” in the book Studies in Social Power, power in organizations is derived from three sources – authority, charisma and knowledge. Organizational decisions are often made based on charisma, the individual’s magnetism, persuasiveness and debating abilities. Expert-knowledge decisions hail from two sources: knowledge experts and on-the-job experts. These are the people who know how to get work done within the existing organizational culture and structure. They include firefighters, expeditors and others who have established the networks and skills to get issues resolved and problems solved. Expert knowledge also may enhance the decision-making role of Six Sigma practitioners, change agents and statisticians.

Unlike power derived from charisma or knowledge, authoritative decision making relies on the inherent power of those in positions of authority. Authoritative decisions come from management and are necessary for a modern bureaucracy to function properly. However, an over-reliance on authority-based decision making prompted the need for more participative management approaches that empower employees at all levels of the organization, according to business professor and author Edward E. Lawler. Six Sigma is one such power-sharing, participative approach to management and, as such, is not always compatible with authority-based decision making. Thus, while Six Sigma teams may utilize the expertise of organizational members and the perspectives of those in authority, they cannot rely on these sources as the sole basis for sound decision making.

Intuitive decision making includes many non-structured strategies for reaching decisions, as Edward Lumsdaine and Monika Lumsdaine note in their book Creative Problem Solving: Thinking Skills for a Changing World. These include hunches, the “ah-ha” phenomenon, trial and error, guessing and experience. Everyone knows these approaches because it is human nature to over-rely on them, even though scientific evidence has shown them to be both inefficient and ineffective. People are pattern-seeking creatures who make quick intuitive decisions based on previous experience, using both deductive and inductive reasoning. However, something more than intuition alone is required for process improvement projects. Intuition and experience about potential root causes and solutions are acceptable only as long as decision makers are willing to balance their preconceived notions and other possibilities with data.

Scientific decision making includes the analytical/engineering approach, the scientific method and the quality approach, according to Lumsdaine and Lumsdaine. The analytical/engineering approach models problems mathematically, while the scientific method tests inductively derived hypotheses with empirical data. The quality approach to decision making combines elements of the analytical and scientific methods into a data-based approach centered on the DMAIC methodology.

Originally, early practitioners of Six Sigma (circa mid-1980s) were schooled in statistics, research methods, and the validation of data and improvements. For many different reasons – e.g., economy of scale, lack of skilled practitioners, poor metrics, and the inevitable weakening of training and certification – the original statistical rigor has been eroded. To correct this movement away from data-based decision making, a number of suggestions should be considered.

Data-Based Decision Making

Usually a missing component of Six Sigma training is a clear illustration of the logical flow of data-based decision making. The flow chart shown in Figure 1 can be used to illustrate the sequence of events necessary for data-based decision making.

Figure 1: Data-Based Decision Making Flow

The flow can be thought of, in simplified form, as data → information → graphical displays → decision making. Essentially, data-based decision making means using better pictures of data to make better decisions. It is what is added to graphical displays of data and data trends that separates this approach from other decision-making methods: specifically, the fitting of distributions, the establishment of probabilities and the addition of decision limits that enable better decision making.

The flow starts when validated data (baseline or historical) is collected. Since most people do not get much from spreadsheets of numbers, data summaries are created (i.e., measures of central tendency, variability, skewness, peakedness, percentiles and outliers). And since people usually do not like summaries of numbers much better than raw data, pictures of the data are created to illustrate these characteristics – “a picture is worth a thousand words” (or numbers, in this case). Known distributions can then be overlaid on, or fitted to, these graphical displays. From the distributions, probabilities can be calculated. From the probabilities, decision limits can be created that allow better decisions.
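
As a rough illustration of this flow, the following sketch fits a normal distribution to simulated setup-time data, estimates a probability and derives decision limits. The data, the 30-minute requirement and the normality assumption are all hypothetical, used only to show the sequence of steps.

```python
# A minimal sketch of the data -> distribution -> probability -> decision-limit
# flow. The setup-time data, the 30-minute requirement and the assumption of
# normality are all hypothetical, used only to illustrate the sequence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
setup_times = rng.normal(loc=22.0, scale=4.0, size=50)  # stand-in for validated baseline data

# Step 1: data summaries (central tendency, variability, skewness, peakedness)
mean, sd = setup_times.mean(), setup_times.std(ddof=1)
skew, kurt = stats.skew(setup_times), stats.kurtosis(setup_times)

# Step 2: fit a known distribution to the data
mu, sigma = stats.norm.fit(setup_times)

# Step 3: establish a probability, e.g., the chance a setup exceeds 30 minutes
p_exceed = stats.norm.sf(30.0, loc=mu, scale=sigma)

# Step 4: derive decision limits, e.g., flag future setups outside mu +/- 3 sigma
lower, upper = mu - 3 * sigma, mu + 3 * sigma

print(f"mean={mean:.1f} sd={sd:.1f} skew={skew:.2f} kurtosis={kurt:.2f}")
print(f"P(setup > 30 min) = {p_exceed:.3f}")
print(f"decision limits: [{lower:.1f}, {upper:.1f}] minutes")
```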

Integrating Data-Based Decision Making into DMAIC

Data-based decision making is at the heart of Six Sigma and especially its methodology: Define, Measure, Analyze, Improve, Control (DMAIC). When followed correctly, the rigor inherent in the DMAIC process is not optional, as the following discussion of each phase illustrates. Every Green Belt or Black Belt project is a comparative experiment. Usually, the goal is to shift a mean (hit a target, minimize or maximize) and reduce variability. It follows that to validate improvements, at least two comparative experiments must be completed: one to test the shift in central tendency and one to test the reduction in variation. In addition, balanced metrics should be checked to make sure, for example, that cycle time is not reduced at the cost of increased rework. Working backward from this requirement, it is easy to see that it all starts with valid data.
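
As a minimal sketch of these two validation experiments, the following uses simulated before-and-after cycle-time data (all values are hypothetical, not from any project discussed here). Welch’s t-test checks the shift in central tendency; Levene’s test checks the reduction in variation.

```python
# A minimal sketch of the two validation experiments described above, using
# simulated before/after cycle-time data (all values are hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
before = rng.normal(loc=25.0, scale=5.0, size=40)  # baseline ("before") data
after = rng.normal(loc=20.0, scale=3.0, size=40)   # pilot ("after") data

# Experiment 1: did central tendency shift? (Welch's t-test, unequal variances)
t_stat, p_mean = stats.ttest_ind(before, after, equal_var=False)

# Experiment 2: did variability drop? (Levene's test, robust to non-normality)
w_stat, p_var = stats.levene(before, after)

print(f"mean shift:      t={t_stat:.2f}, p={p_mean:.4f}")
print(f"variance change: W={w_stat:.2f}, p={p_var:.4f}")
# p-values below the chosen alpha (e.g., 0.05) support a claim of real improvement.
```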

Define Phase – The following questions are critical to the definition of the project and set the tone for data-based decision making: “Do I know what my ‘main pain’ is?” “Have I identified the process that creates this pain?” “Have I identified balanced metrics to make sure that when I alleviate my pain I don’t shift it elsewhere?” During the Define phase, it is vital that teams identify the main pain and potential balancing metrics. The data-based decision making flow chart ensures that this happens.

Measure Phase – The Measure phase is all about data-based decision making. The key output is a valid baseline of the main pain. Establishing this requires several detailed steps. First, the main pain and balanced metrics identified in the Define phase are fully described in a data management plan. At the very least, the plan should include operational definitions, data collection and reporting information, measurement systems analysis methods, graphical displays, and stratification factors. Tables 1, 2 and 3 show an example data management plan.

Table 1: Data Description

| Metric | Operational Definition (Verbal) or Formula (Symbols) | Family of Measure | Data Type |
|---|---|---|---|
| Setup Time (Min.) | Time elapsed between last good part and acceptance of new part | T | Continuous |
| Raw Material Dimensions (Indices) | Process capability Cp/Cpk per print | Q | Continuous |

Table 2: Data Collection and Validation

| Data Source or Location | Collector | Sampling Plan | Stratification Plan | MSA Plan |
|---|---|---|---|---|
| Log | Operator | 1 / Setup | By Machine and Operator | Paired t-Test and Time Study |
| Tag | Receiving Inspector | 1 / Lot | By Critical Dimension | Gage R&R |

Table 3: Graphical Display and Project Validation

| Central Tendency and Variation | Over Time | Main Desirability | Validate Project |
|---|---|---|---|
| Histogram | Individual and Moving Range Chart | Decrease Mean | Median Test |
| Histogram | Xbar and Sigma Chart | More Capable | Median Test |
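
Table 3 names a median test as the project-validation method. As a minimal sketch, Mood’s median test (scipy’s median_test) can compare hypothetical before-and-after samples; all data below are simulated for illustration.

```python
# A minimal sketch of the median test named in Table 3, applied to hypothetical
# before/after samples using Mood's median test from scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
before = rng.normal(loc=26.0, scale=5.0, size=35)
after = rng.normal(loc=21.0, scale=5.0, size=35)

stat, p_value, grand_median, table = stats.median_test(before, after)
print(f"grand median={grand_median:.1f}, p={p_value:.4f}")
# A small p-value indicates the samples do not share a common median,
# supporting the "Decrease Mean" (central tendency) goal in Table 3.
```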

Of course, it is necessary that all data used for data-based decision making be valid. Repeatability, reproducibility, stability, linearity and bias should all be explored and documented. Due diligence in this step is mandatory, and in some regulated situations it is legally required. Another requirement of the Measure phase is to ensure that correct sample sizes are collected, economically and free of bias. Data summaries must be calculated and presented in graphical displays to better communicate the present state. These displays should contain fitted distributions to establish probabilities and decision limits. All of this must be accomplished to baseline the main-pain data for the comparative experiments (the “before” data) that will later validate the improvements and, ultimately, the project itself.
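
As one small illustration of the bias check mentioned above, a paired t-test can compare an operator’s readings against reference values measured on the same parts. The readings below are hypothetical.

```python
# A minimal sketch of one MSA check noted above: a paired t-test for measurement
# bias, comparing an operator's readings to reference values on the same parts
# (all readings are hypothetical).
import numpy as np
from scipy import stats

reference = np.array([10.1, 12.3, 9.8, 11.5, 10.9, 12.0, 9.5, 11.2])
measured = np.array([10.3, 12.6, 9.9, 11.9, 11.0, 12.4, 9.8, 11.5])

t_stat, p_value = stats.ttest_rel(measured, reference)
bias = (measured - reference).mean()
print(f"estimated bias={bias:.2f}, t={t_stat:.2f}, p={p_value:.4f}")
# A significant p-value points to systematic bias that must be corrected
# before the data can be considered valid.
```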

Analyze Phase – In this phase, data-based decision making is embodied in the comparison of stratification factors to detect differences and identify potential root causes. For example, are two call centers different, and if so, why? The use of analysis of variance and other comparative experiments to identify differences and potential root causes is often minimized in Black Belt courses and may be left out of Green Belt training entirely. These omissions are critical blows to rigorous data-based decision making. In addition, within the Analyze phase, data-based decision making is embodied in descriptive, comparative and relational statistics used to identify, quantify and model potential root causes and solutions.
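
As a minimal sketch of such a stratified comparison, a one-way ANOVA across a hypothetical stratification factor (three machines, simulated data) flags whether any stratum differs from the others.

```python
# A minimal sketch of a stratified comparison: one-way ANOVA across a
# hypothetical stratification factor (three machines, simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)
machine_a = rng.normal(loc=20.0, scale=3.0, size=30)
machine_b = rng.normal(loc=24.0, scale=3.0, size=30)
machine_c = rng.normal(loc=20.5, scale=3.0, size=30)

f_stat, p_value = stats.f_oneway(machine_a, machine_b, machine_c)
print(f"F={f_stat:.2f}, p={p_value:.4f}")
# A small p-value says at least one stratum differs -- a clue pointing
# toward a potential root cause worth investigating.
```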

Improve Phase – Data-based decision making is used in the Improve phase to explore pilot improvements for shifts in central tendency and reductions in variability. With a little knowledge of research methods and some foresight (begun in the Define phase), better research can be conducted, such as leveraging control groups, randomization and bias reduction.
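
As a minimal sketch of the randomization point, assuming a hypothetical pilot pool of 40 work orders, units can be randomly assigned to control and treatment groups before the process change is piloted.

```python
# A minimal sketch of the randomization point above: randomly assigning a
# hypothetical pool of 40 work orders to control and treatment groups.
import numpy as np

rng = np.random.default_rng(seed=5)
units = np.arange(1, 41)              # candidate work-order IDs (illustrative)
shuffled = rng.permutation(units)
control, treatment = shuffled[:20], shuffled[20:]

print("control:  ", sorted(control.tolist()))
print("treatment:", sorted(treatment.tolist()))
# Only the treatment group receives the process change; the control group
# provides the unbiased comparison baseline for the pilot experiments.
```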

Control Phase – Besides validating the “goodness” of the “before” data, the second most critical step within DMAIC for ensuring data-based decision making is the collection of “after” data to validate improvements: shifts in central tendency and reductions in variability. This is the proof. This is the better way – the better decision. One very simple question is answered: “Did we make a difference or not?” This is the reason for the statistical rigor. Can the project team, with confidence, look others in the eye and make claims of real process improvement? A second component of data-based decision making within the Control phase is the requirement to validate the sustainability of improvements. Unfortunately, this is often sacrificed in the interest of meeting tollgate reviews, deadlines and financial reporting.
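
As a minimal sketch of a sustainability check, individual and moving-range (I-MR) control limits can be computed from the validated “after” data and applied to ongoing observations. The data are simulated; 2.66 is the standard I-chart constant.

```python
# A minimal sketch of a sustainability check: individual and moving-range (I-MR)
# control limits computed from simulated, validated "after" data. The constant
# 2.66 is the standard I-chart factor (3 / d2 for subgroups of size 2).
import numpy as np

rng = np.random.default_rng(seed=6)
after = rng.normal(loc=20.0, scale=3.0, size=30)

moving_range = np.abs(np.diff(after))
mr_bar = moving_range.mean()
center = after.mean()
ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar

print(f"I-chart limits: LCL={lcl:.1f}, CL={center:.1f}, UCL={ucl:.1f}")
# Ongoing points inside the limits, with no nonrandom patterns, support a
# statistical claim of short-term sustainability.
```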

Conclusion: What Needs to Be Done

Six Sigma has made significant contributions to how organizational members make decisions. It has allowed decision makers to move away from a reliance on decisions based on tradition, authority and intuition, and has given them the tools and methodology to make better data-based decisions. However, rigorous data-based decision making in Six Sigma projects appears to be at a critical juncture. There has been a continual erosion of the teaching and use of valid data, rigorous and appropriate research methods, and statistical validation of improvements, savings and sustainability. It must be noted with concern that measurement systems analysis (MSA) is increasingly left off of, or labeled as optional on, some DMAIC roadmaps.

To reverse this trend, it is vital that adequate training time be devoted to MSA techniques and comparative experiments. All four MSA techniques should be taught and explored in a lab setting using experiential learning strategies. Technical training is usually long on lecture and software demonstrations, and often contains no lab component at all. Rarely do participants plan, conduct and interpret their own comparative experiments within a lab setting, making transfer to real projects difficult. Four MSA labs and at least a dozen comparative experiments should be conducted by Black Belt candidates to ensure even minimal proficiency. Every branch of the comparative experiment decision tree shown in Figure 2 should be understood.

Figure 2: Comparative Experiment Decision Tree
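
The branches of Figure 2 are not reproduced here. Purely as a hypothetical sketch, one common set of branches in such a decision tree for continuous data can be expressed as a simple test chooser; the logic below is illustrative, not the figure’s actual content.

```python
# A hypothetical sketch of one path through a comparative-experiment decision
# tree for continuous data (illustrative only; not the actual Figure 2 logic).
def choose_comparative_test(n_groups: int, normal: bool, paired: bool) -> str:
    """Pick a comparative experiment for continuous data."""
    if n_groups == 2:
        if paired:
            return "paired t-test" if normal else "Wilcoxon signed-rank test"
        return "two-sample t-test" if normal else "Mann-Whitney U test"
    return "one-way ANOVA" if normal else "Kruskal-Wallis test"

print(choose_comparative_test(n_groups=2, normal=True, paired=False))   # two-sample t-test
print(choose_comparative_test(n_groups=3, normal=False, paired=False))  # Kruskal-Wallis test
```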

Because every Six Sigma project is a comparative experiment, every Green Belt should know how to conduct a before-and-after comparative experiment to detect a statistical shift in central tendency and a reduction in variation. Even with Green Belts, there must be an emphasis on MSA, comparative experiments, research methods and sustainability skills, and less focus on one-shot case studies and simulations. Otherwise, a legitimate fear is that organizations and practitioners are allowing Six Sigma to become a pretense of data-based decision making, when in fact decisions are being made the pre-Six Sigma way – based on intuition, tradition or authority.
