There has been a buzz about DMAIC, and it goes something like this: “Because DMAIC is for reactive problem solving, and companies are getting more proactive and oriented to Design for Lean Six Sigma (DFLSS), what is going to happen when the reactive problems go away? Won’t the use of DMAIC dwindle or disappear?”
The quick answer: “No, it won’t.” There is no real danger of reactive problems going away. But there is another answer worth exploring that calls for a broader view of DMAIC. With a more flexible view of DMAIC in hand, practitioners can revisit the question about DMAIC’s future and discover long-term uses for the approach in control, monitoring and design projects.
A Little DMAIC History
A discussion about the future of DMAIC should include at least some background about its history. Not everyone remembers that DMAIC was not part of the original Six Sigma at Motorola in the late 1980s. The company’s first improvement model was the simpler Six Steps to Six Sigma, which can be summarized:
- Identify the product created or the service provided.
- Identify the customer(s) for the product or service, and determine what is important to them (requirements in measurable terms).
- Identify your needs in order to provide the product or service.
- Define the process for doing your work.
- Mistake-proof the process and eliminate defects and waste.
- Ensure continuous improvement by measuring, analyzing and controlling the improved process.
While this reveals some early inklings of DMAIC, the approach itself was introduced later. DMAIC added important detail and rigor, and has proven to be very robust. Even when translated into many languages, with hundreds of training variants around the world, teams have been able to understand and use the steps and tools to get meaningful results. While most companies first use DMAIC to solve their most pressing problems, this examination will cover it more broadly as a “weakness-based” improvement approach.
There are several contrasts between the “strength-based” improvement that was in play at Motorola and many other companies in the 1980s and the “weakness-based” approach that Six Sigma ushered in (the definitions of these terms and working examples of each approach are listed in the table below). Before Six Sigma, Motorola measured many manufacturing processes using yield, which simply counts the good units exiting an area as a percentage of the units started in production; defective units lower the yield, but the defects within them go uncounted. Six Sigma shifted the focus to those defects within units. That is a shift from strength-based to the more detailed weakness-based view.
Implications of Weakness vs. Strength Orientation

| | Weakness-Based | Strength-Based |
|---|---|---|
| Measures | Defects, complaints | Yield, customer satisfaction |
| Nature of the Data | Diagnostic detail (defects within units) | Performance roll-up (good vs. defective units) |
| Project Orientation | “How to reduce complaints?” Facts and data, looking to past and present | “How to improve customer satisfaction?” Ideas, looking to the future |
| Implications on Communication with Management | “What are the defects?” Bad news and details are OK (the boss gets important details) | “How is the yield looking?” The boss wants good news (that’s what they’ll get) |
A common first reaction to these concepts: “What’s the difference whether I have a project that sets out to improve yield or one to reduce defects? Aren’t these just two ways to say the same thing?” Yet there are some big implications that follow when a team or company chooses one or the other of those perspectives.
As the table above outlines, yield and customer satisfaction scores, like most strength measures, summarize performance; they are general in detail and lagging in nature. If yield falls, say, from 98 percent to 80 percent, the strength metric does not do much to tell practitioners what is wrong or where to focus improvement. Weakness-based measures like defects and complaints usually provide more detail and diagnostic help. In a yield computation, each defective unit may have multiple and different defects within it that made it fail, and in a customer satisfaction roll-up, each unsatisfied customer may have multiple and different complaints. Tracking counts and changes in the types of defects and complaints offers more sensitivity and early warning about performance gaps, and more information about where the specific problems and potential causes lie.
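The contrast can be made concrete with a small sketch. Using entirely hypothetical unit data, the strength-based yield roll-up produces a single number, while the weakness-based view counts defects by type and points toward where to focus:

```python
# Illustrative sketch with hypothetical data: contrast a strength-based
# yield figure with weakness-based defect counts for the same units.
from collections import Counter

# Each unit carries a list of the defects found in it (empty list = good unit).
units = [
    [], [], [], ["solder bridge"], [],
    ["missing screw", "scratch"], [], ["scratch"], [], [],
]

# Strength-based view: one performance roll-up, no diagnostic detail.
good = sum(1 for defects in units if not defects)
yield_pct = 100.0 * good / len(units)
print(f"Yield: {yield_pct:.0f}%")  # prints "Yield: 70%"

# Weakness-based view: defects counted by type, the raw input for a
# Pareto chart showing where improvement effort should go first.
defect_counts = Counter(d for defects in units for d in defects)
for defect, count in defect_counts.most_common():
    print(defect, count)
```

Both views are computed from the same units, but only the second tells a team which defect type to attack.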
DMAIC is weakness-based and it supports and encourages that detailed diagnostic thinking. While it is used mainly for problem solving, DMAIC thinking and tools help practitioners to monitor and act on diagnostic detail in the ongoing life of a product or service.
Using DMAIC to Monitor and Control for the Long Term
All products, processes and improvements, once implemented, require some level of monitoring. Environments, users and related systems outside the practitioner’s control can change enough to reduce the intended value or introduce new risk. Even a company that roots out all reactive problems and becomes proactive and design savvy will need a systematic way to ensure that it is monitoring data and acting on it quickly and wisely. This is a long-term job for DMAIC. The need for diagnostic, weakness-based measures, and for a process that uses them to drive quick analysis of causes and drivers, will call for DMAIC tools and thinking even if reactive problem solving were to fade away.
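One common form this ongoing monitoring takes is a control chart on defect counts. As a minimal sketch (with hypothetical baseline data), a c-chart flags samples whose defect count breaches the usual three-sigma limits, prompting a return to analysis:

```python
# Minimal monitoring sketch with hypothetical data: a c-chart check that
# flags defect counts outside three-sigma control limits, the kind of
# ongoing signal a Control-phase plan would act on.
import math

baseline = [4, 2, 3, 5, 3, 4, 2, 3]            # defect counts from a stable period
c_bar = sum(baseline) / len(baseline)          # average defects per sample
ucl = c_bar + 3 * math.sqrt(c_bar)             # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))   # lower limit, floored at zero

def out_of_control(count: int) -> bool:
    """True when a new sample's defect count breaches the control limits."""
    return count > ucl or count < lcl

print(out_of_control(4))    # within limits: no action needed
print(out_of_control(12))   # breach: investigate causes and drivers
```

The chart itself is routine; the DMAIC contribution is the discipline of responding to a breach with measurement and cause analysis rather than tampering.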
Stretch DMAIC in a DFLSS Environment
More DMAIC practitioners may find themselves working in environments that include design of new products and services. Their colleagues may be using a variant of DFLSS, such as IDOV (Identify, Design, Optimize, Validate) or DMADV (Define, Measure, Analyze, Design, Verify). While each approach is clearly distinct at the extreme ends of the spectrum, projects often fall in between pure DMAIC and pure DFLSS (see figure below). For some of those in-between cases, DMAIC can gracefully stretch to cover projects where some new design thinking is needed in concert with improvements to an existing product or service. While DFLSS has important and unique tools for physical product design, teams in services and software environments may find a tailored DMAIC suits their blended needs. Some examples of stretching DMAIC:
- An expanded view of requirements discovery in Define
- The essence of quality function deployment as a table consolidating what would be multiple Y to x trees in Measure
- More emphasis on robust design and model-building in Analyze and Improve to add the horsepower needed to move DMAIC far enough in the direction of Design
It would be nice and simple if all projects lived in the corners of Figure 1, but real-world projects often fall in the middle of the square. While classic DMAIC will continue to combat reactive problems for a long time, DMAIC thinking and tools also have a long future beyond that use: in monitoring and control, and in projects that blend improvement and redesign.