It is not unusual to find that a few years down the road, the results from a Lean Six Sigma deployment are not quite as good as what a company hoped or expected. What often helps in that situation is a formal deployment review, conducted in much the same way as a tollgate review on an individual project.
The basic model for a deployment review is the same as for any review. The organization’s leadership will want to document both results and methods, as depicted in the simple grid shown in the table below. This will make it clear whether progress has been hindered by a breakdown in strategy and planning (the methods), in the execution of those plans (the results) or a combination of both.
**Deployment Review Grid**

| Did the Organization Follow Its Deployment Plan? | Got the Results It Wanted? — Yes | Got the Results It Wanted? — No |
|---|---|---|
| **Yes** | Document the successes. | The methods the organization thought would be successful were not. Determine what went wrong. |
| **No** | Desired results obtained despite not following the planned methods. What was missing or wrong in the plans? What new knowledge allowed the organization to get good results? | Determine why the plans were not followed. Were they poorly documented? Incomplete? Were insufficient resources (time, money, personnel) allotted to the effort? |
What exactly should be looked at in terms of results and methods? Successful deployments are the result of the interaction of three elements – strategy integration, execution and skilled infrastructure (see figure above).
As the figure shows, the three elements of deployment drive each other in particular ways. Strategic clarity gives the focus needed for effective execution; having skilled resources is necessary to ensure that the strategy is appropriate in the first place, that its interpretation is consistent, and that the organization has the level of commitment needed to sustain the effort.
Targeting the deployment review on these areas may not catch every roadblock or barrier, but it should expose those that are the most harmful in terms of results.
Strategy Integration Review
Applying the review grid at this level means determining whether the organization had a clearly defined strategy associated with its Lean Six Sigma deployment, and whether both the hard and soft results are documented.
> What was the burning platform driving Lean Six Sigma deployment?
> Was the strategic importance of the goals clearly communicated to all involved?
> Was progress regularly reviewed against the goals?
> How does the organization ensure that project selection is driven by priority business needs?
> Is there reliable data on performance before and after changes were made? (Sometimes good results are obscured by poor data collection; or vice versa, bad results can look good if data is unreliable.)
> What documentation does the organization have showing progress toward or shortfalls relating to strategic goals?
Skilled Infrastructure Review
What has made Lean Six Sigma much more successful than its predecessors is having an implementation infrastructure that effectively translates the strategic agenda into actions to maximize value and provide effective management and monitoring of results.
> Is there a deployment Champion in each business unit? What training and experience does he or she have?
> Have project sponsors been trained in Lean Six Sigma basics and project review basics?
> How effective are the mechanisms for connecting deployed resources in terms of support and communication? What is the basis for this judgment?
> What evidence does the organization have that Lean Six Sigma is becoming integrated into the daily management practices of the business?
> What is being done to ensure that deployed resources are carrying out their responsibilities? (For example, are they held accountable for both methods and sustainable results?)
> What targets for achieving a critical mass were set around the deployment methods? (Number of Black Belts trained? Number of projects launched?) How do the actual numbers compare to the target? What barriers explain any shortfalls?
> Is there a rigorous process for measurement and tracking of project financial results?
> What is the average return per Black Belt? How does that compare to expectations?
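The quantitative checks in the questions above — actuals versus critical-mass targets, and average return per Black Belt — amount to simple arithmetic. The sketch below illustrates one way to compute them; all figures, field names, and the attainment threshold are hypothetical, not drawn from any real deployment.

```python
# Hypothetical deployment figures for illustration only.
deployment = {
    "black_belts_trained": {"target": 40, "actual": 32},
    "projects_launched": {"target": 120, "actual": 95},
}

# Validated financial benefit per completed project (hypothetical, in dollars).
project_benefits = [180_000, 95_000, 240_000, 60_000, 310_000]
active_black_belts = 5


def attainment(metric):
    """Return actual results as a fraction of the target."""
    return metric["actual"] / metric["target"]


avg_return_per_belt = sum(project_benefits) / active_black_belts

for name, metric in deployment.items():
    print(f"{name}: {attainment(metric):.0%} of target")
print(f"Average return per Black Belt: ${avg_return_per_belt:,.0f}")
```

The point of the sketch is not the arithmetic itself but the discipline it implies: each question in the review only has an answer if targets were set and actuals were tracked from the start.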
Execution Review
The first two elements of the deployment review focused on developing a fully realized strategy and the resources to implement it. In this element, an organization needs to look at where and how execution succeeded or failed.
> What made the organization confident that its expectations were reasonable? Check that those factors are still valid. If it turns out the expectations were unreasonable, what needs to change – the expectations or the methods used to fulfill them?
> What was done to engage leadership and give them a compelling reason to embrace Lean Six Sigma?
> Were Black Belts and Green Belts fully trained in integrated Lean Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) methodology?
> Is team leadership training a standard part of the organization’s Lean Six Sigma curriculum? (Change and improvement happen through people, not just statistical tools. Black Belts and Champions must be able to draw the best from their teams. This requires that they be trained in team leadership skills as well as technical tools. Having the right teams and team dynamics in place for learning and execution can both accelerate the application of Lean Six Sigma and multiply its financial and operational benefits throughout an organization.)
> How are Champions and Master Black Belts doing in terms of opportunity identification, project selection, and prioritization? How does the organization’s leadership know?
> Does the organization have a project management system that allows it to control numbers of projects-in-progress so staff is not overloaded?
> Have project Sponsors taken on the accountability for long-term results? What indicators are there that this is happening or not? Is there a method for financial validation of results after projects are completed?
> Is there consensus on common metrics and tracking methods? What dashboard metrics are reviewed regularly? Are they helping in the management of the deployment?
> What hard results were expected from the deployment (quality, cost, profit, revenue, speed, etc.)?
> What results were actually achieved?
> How well do team leaders, Black Belts, Master Black Belts, etc. rate in terms of leadership skills?
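One concrete check from the list above — a project management system that caps projects-in-progress so staff are not overloaded — can be sketched as a simple launch guard. The cap and the Belt names here are hypothetical assumptions, not a prescribed policy.

```python
# Hypothetical work-in-progress (WIP) guard for launching new projects.
MAX_ACTIVE_PROJECTS_PER_BELT = 2  # assumed policy limit, not from the article


def can_launch(active_projects_by_belt, belt):
    """Allow a new project only if the Belt is under the WIP cap."""
    return active_projects_by_belt.get(belt, 0) < MAX_ACTIVE_PROJECTS_PER_BELT


active = {"BB-Alice": 2, "BB-Bob": 1}
print(can_launch(active, "BB-Alice"))  # False: already at the cap
print(can_launch(active, "BB-Bob"))    # True: has capacity for one more
```

A guard like this makes the overload question auditable: the review can ask how often launches were deferred, rather than relying on anecdote.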
Conclusion: Who Reviews and When
Keep in mind that although each of these elements was addressed separately, in reality there will be overlap between them. That is acceptable, because organizations typically compile all the lessons together at the end so they can look for patterns across elements. If progress is not being made toward strategic goals, for example, the problem could lie in project selection and prioritization, the numbers of projects launched and completed, or the resources used to support the projects (such as when Black Belt time is limited).
In practice, these reviews are often done through a combination of meetings (with the leadership team, Champions, Sponsors, Master Black Belts, etc.), reviews of project documentation, and internal surveys or focus groups that ask people about barriers and successes.
Most best-in-class companies perform in-depth deployment reviews annually. The trick is to strike a balance between going deep enough that big course corrections can be made when needed and not spending so much time and effort that the review itself slows down the initiative. The stakes are high, so the time and money spent on a good review are well worth the effort.