This article illustrates the Six Sigma DMAIC process using an organization that develops software packages as an example. The Six Sigma DMAIC approach to process improvement provides a powerful mechanism for improving the software process. Typically, benefits will exceed costs within 6 to 12 months of initiating a Six Sigma program for software development, and the ongoing return will be very substantial – often a 15-25% reduction in software development costs in year two, with continuing reductions thereafter.
The Analyze Phase
Analyze is a seven-step process:
- Measure the capability of the existing process
- Refine improvement goals
- Identify significant data segments / patterns
- Identify possible Xs
- Identify and verify critical Xs
- Refine the financial benefit forecast
- Analyze Phase Tollgate Review
Measure the capability of the existing process
When we examine the data collected during the Measure phase, we find that, using the customer’s dates, we really have a 20% on-time rate (our ‘baseline’), not the 62% figure we got from the software team. We can convert this information into one of several standard Six Sigma measures of capability: defects per million opportunities (DPMO), z-score, sigma level, Cp, or Cpk.
We won’t go into the mathematics of these here (you can find more elsewhere on this site). In this instance we would probably choose Cpk (‘worst-case’ capability) as the most suitable measure, and we would find that the value we get is less than 0.2 – not very good! We would like to see a value of at least 1, and higher would be better.
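As a rough sketch of the conversion (assuming the common 1.5-sigma short-term/long-term convention; other conventions exist), the 20% on-time baseline can be turned into DPMO, a sigma level, and an approximate Cpk like this:

```python
from statistics import NormalDist

on_time_rate = 0.20                               # baseline from the Measure phase
dpmo = (1 - on_time_rate) * 1_000_000             # defects per million opportunities
z_long_term = NormalDist().inv_cdf(on_time_rate)  # z-score of the observed yield
z_short_term = z_long_term + 1.5                  # conventional 1.5-sigma shift
cpk = z_short_term / 3                            # roughly 0.22 here, below the 1.0 we want
print(f"DPMO={dpmo:,.0f}, sigma level={z_short_term:.2f}, Cpk={cpk:.2f}")
```

Note that a 20% yield gives a negative long-term z-score; only the conventional shift pulls the sigma level above zero, which is one reason this process rates so poorly.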
Refine improvement goals
With this knowledge in mind, we may want to re-evaluate our goal. If the goal remains 90%, we are targeting a 4.5-fold improvement over our 20% baseline! While this may not be impossible, a single intervention is unlikely to produce a gain of that magnitude, so we may wish to set a target that is more realistic and attainable in the near term, within the scope of our current project. When the gap is this large, it is likely we will need a series of Six Sigma projects to close it – usually a better choice than one very large project. We may choose to keep 90% as our stretch target, but we should not be disappointed if our achievement on the first project is somewhat less – the payoff may still be very substantial.
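The size of that gap is simple arithmetic:

```python
baseline = 0.20  # current on-time rate from the Measure phase
goal = 0.90      # original improvement goal
improvement_factor = goal / baseline  # the target is 4.5x (450% of) the baseline
print(f"Required improvement: {improvement_factor:.1f}x")
```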
Identify significant data segments / patterns
As indicated earlier, we could segment the data by software group or by software type – if we did so we would follow the pattern of analysis discussed here for each segment independently. In the interest of keeping this easier to follow we will focus on the single segment of data shown earlier – as already mentioned, we do notice a pattern in this data. Most of the project outcomes seem to be related to the planning best practices attributes reflected in the data we collected, but there are five ‘outliers’ that seem to be influenced by one or more other factors.
Identify possible Xs
Our observations about the pattern in the data lead us to the next step in our analysis. What other unidentified factor(s) might explain the outliers we have observed? One of the factors influencing software project outcomes is the schedule itself – when we start with an unrealistic schedule, bad things often happen.
This gives us a clue that the realism of the planned schedule could be one of the factors that explains the outliers we observed. We can investigate that hypothesis by collecting an additional piece of information about each of these projects – i.e., how did the planned schedule compare to industry norms for similar projects? One way we might answer this question would be to use one of the commercially available software project estimating models. Note that there are a number of complicating factors relating to use of these models that we won’t go into here – that topic will be addressed in a future article. For now, we ask that you accept our assertion that this can be done.
We gather this additional piece of information about each of the 20 projects and add it to our data table – we’ll call the new column “Plan %”, defined as (actual planned duration) / (duration indicated by the estimating model). A value less than 100% means our planned schedule was shorter than that given by the model – in other words, unduly optimistic. Adding this column gives us an updated data table for the 20 projects.
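The Plan % computation itself is straightforward; here is a sketch using hypothetical figures for a single project (the model duration is an assumed output of a commercial estimating model):

```python
# Hypothetical figures for one project; the model duration would come from
# a commercial estimating model, which we only assume is available here.
planned_duration_months = 12
model_duration_months = 16
plan_pct = planned_duration_months / model_duration_months
# plan_pct = 0.75: the planned schedule was 25% shorter than the model's
# estimate - in other words, unduly optimistic.
print(f"Plan % = {plan_pct:.0%}")
```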
Although we now have what seems to be a good candidate list of controllable factors, we may want to probe a bit deeper to see if we can discover the underlying ‘whys’. Why did we not break large tasks down into one-week segments? Why did we not define predecessor/successor relationships among our tasks? One of the Six Sigma tools, known as the “5 Whys”, encourages us to probe deeply by asking why five times in an effort to get to the real root of the problem. To illustrate:
Why don’t we define predecessors?
- We didn’t know it was important.
- Why? No training was provided.
- Why? There was no training budget.
- Why? The manager didn’t think it was important.
This analysis tells us something about issues we will need to address to make an improvement stick.
Identify and verify critical Xs
With this data in hand we can determine which of these factors are the most influential in determining the schedule performance outcomes. One way to do that is Multiple Regression analysis. Again, we won’t go into the details of how to do that; we’ll go directly to the conclusions we can draw from such an analysis.
The conclusions we reach from analysis of this sample indicate that about 78% of the variability we see is ‘explained’ by three factors (the Xs) – Task Duration, Predecessors, and Plan %. Hence our project (likely this is really 2 projects – one to deal with the planning process and one to deal with the estimating problem) will focus on actions we can take to improve our control over these Xs.
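We won’t reproduce the full multiple regression here, but a minimal sketch of the underlying idea – fitting a line by least squares and computing R², the share of variability ‘explained’ – looks like this for a single X and purely synthetic, illustrative data (not the article’s 20 projects):

```python
# Illustrative only: synthetic data, NOT the article's 20 projects.
# Ordinary least squares for a single X (Plan %) against Y = actual/planned duration.
xs = [0.75, 0.80, 0.90, 1.00, 1.05, 1.10]  # Plan % for six hypothetical projects
ys = [1.60, 1.50, 1.35, 1.10, 1.05, 1.00]  # schedule overrun ratio for each

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# R^2: the share of the variability in Y 'explained' by the regression
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot
print(f"slope={slope:.2f}, R^2={r_squared:.2f}")
```

The negative slope in this synthetic data mirrors the intuition from the text: the more optimistic the plan (lower Plan %), the larger the overrun. A real analysis would regress Y on all three candidate Xs at once.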
Refine the financial benefit forecast
In order to determine what improvements like this would be worth to the business, we might go back to the business cases for our sample projects to estimate the opportunity cost to the business resulting from the delays. To illustrate, we might find that the average first-year business benefit expected from these 20 projects was $850,000. We know from our data that on average our projects are planned to take 15 months, and that on average we actually require 134% of the planned duration – hence on average we are about 5 months late.
Given the average delay and the average first-year business benefit, we may estimate the opportunity cost as 5/12 times $850,000 ($354,000) times the cost of money – at 15% that is about $53,000 per project, or over $1,000,000 total for our 20 projects if they could all be delivered on time. Our target is 90% on time, so we might reasonably expect a benefit of around $900,000 if that target is achieved.
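The arithmetic above can be written out as a short check (all figures taken from the text):

```python
avg_first_year_benefit = 850_000  # average per-project benefit from the business cases
avg_delay_months = 5              # 134% of a 15-month plan is about 5 months late
cost_of_money = 0.15

deferred_benefit = avg_delay_months / 12 * avg_first_year_benefit  # ~ $354,000
opportunity_cost = deferred_benefit * cost_of_money                # ~ $53,000 per project
total_20_projects = 20 * opportunity_cost                          # over $1,000,000
print(f"${opportunity_cost:,.0f} per project, ${total_20_projects:,.0f} total")
```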
As suggested above, it appears that we will really need two different projects to accomplish our goal, so we might say that our expected annual benefit for each project is actually $450,000, less whatever it may cost us to do the project. Experience indicates that we can do projects like this for far less than the expected benefit – $100,000 or less per project might be a typical cost.
The Improve Phase
We will continue our example on the assumption that we have decided to spin off the effort to improve our estimating as a separate Six Sigma project – we will follow that thread in a future article. Here we will focus only on the professional planning practices.
Improve is a 5-step process:
- Identify Solution Alternatives
- Tune/Optimize Variable Relationships Between Xs and Ys
- Select / Refine the Solution
- Pilot or Implement the Solution
- Improve Phase Tollgate Review
Identify Solution Alternatives
There appear to be three obvious options in this case – we could (1) train all of the people responsible for project planning in best practices, (2) assign mentors or coaches from the Project Office to review draft plans and help project managers bring them up to the best-practice standard, or (3) use some combination of these options.
Select / refine the solution
Here we evaluate each of the solution alternatives against applicable effectiveness criteria. In this instance we will consider the cost of each option, how effective we believe it will be, and perhaps the lead time required to implement it. Most likely we won’t have any real data about relative effectiveness, so we will want to pilot two or more of the options to compare them. That leads us to the pilot step – after we have the results of the pilot we will make a final selection.
Pilot test or implement the solution
We may decide to try each of the alternatives with a different team, and review at the end of two or three months to see how each pilot is working out. To do this we might, for example, score the plans these teams have produced using the same approach applied to our historical data. If one method shows a meaningfully better result, we would most likely select it, provided its cost and lead time are reasonably in line with the next-best option.
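Comparing the pilots can be as simple as comparing average plan scores; the option names and scores below are purely hypothetical, on the same scale used for the historical plans:

```python
# Hypothetical plan-quality scores from a 3-month pilot of each option,
# on the same scale used to score the historical plans.
pilot_scores = {
    "training only":  [6, 7, 6, 7],
    "mentoring only": [8, 8, 7, 9],
    "combined":       [9, 8, 9, 9],
}
mean_scores = {option: sum(s) / len(s) for option, s in pilot_scores.items()}
best_option = max(mean_scores, key=mean_scores.get)
print(f"Best pilot: {best_option} (mean score {mean_scores[best_option]:.2f})")
```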
The Control Phase
The purpose of the Control phase is to make sure that our improvements are sustained and reinforced. We want to be sure we put in place all of the actions that will help the change be both successful and lasting.
Control can be described as a 5-step process:
- Develop Control Plan
- Determine improved process capability
- Implement process control
- Close the project
- Control Phase Tollgate Review
Develop Control Plan
The control plan will define how we will monitor the Xs and the Y, and what actions we will take if these metrics indicate we have strayed from our planned levels.
In this instance we may decide that each project plan will be scored at the beginning of each project phase, and any that scored below a ‘5’ on any of the Xs will be required to make changes to bring the plan up to that level. We might also indicate that we will require a special project review if the target for the Y is not met. The goal of such reviews will be to discover why the goal was not met, and to institute corrective actions as necessary.
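A minimal sketch of that phase-gate check, assuming plans are scored per-X on the scale described above (the data structure and project names are our own invention):

```python
THRESHOLD = 5  # minimum acceptable score on each X, per the control plan

def plans_needing_rework(plans):
    """Return the names of plans scoring below THRESHOLD on any X.

    plans: dict mapping plan name -> dict of X-name -> score.
    """
    return [name for name, xs in plans.items()
            if any(score < THRESHOLD for score in xs.values())]

# Hypothetical phase-gate scores for two projects
scores = {
    "Project A": {"Task Duration": 6, "Predecessors": 4, "Plan %": 7},
    "Project B": {"Task Duration": 7, "Predecessors": 8, "Plan %": 6},
}
print(plans_needing_rework(scores))  # Project A falls below 5 on Predecessors
```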
Determine improved process capability
Here we document the new level of performance of the selected success metric.
Implement process control
We have defined what we will monitor, who will do it, and how often. In this step we simply execute our control plan.
Close the project
Closing the project includes a formal transfer of responsibility from the Six Sigma team to the operational personnel who will sustain the process. As part of the closing process the team will archive all of the project records and data, and will publicize lessons learned and successes.
Six Sigma Process Improvement – Engaging the Team
Improving a process, like building character, can be done by the people involved, but not to them. Hence in Six Sigma we engage and empower the people who perform the software processes to plan and implement improvements themselves, with the guidance and assistance of Six Sigma specialists who are fully versed in software development best practices (both sets of knowledge are critical to success).
This requires a fundamental change in the way most software people view their jobs.
The Six Sigma DMAIC approach to process improvement provides a powerful mechanism for improving the software process. Typically, benefits will exceed costs within 6 to 12 months of initiating a Six Sigma program for software development, and the ongoing return will be very substantial – often a 15-25% reduction in software development costs in year two, with continuing reductions thereafter.
In order to realize these gains it is essential to recognize that a significant cultural shift must occur. Achieving this cultural shift is best accomplished by providing Six Sigma training for all of the senior developers and managers in the software organization, with a mix of Champions and several levels of Six Sigma specialists (“Yellow Belts”, “Green Belts”, and “Black Belts”) appropriate to the size of the organization.