This combination “case study and tutorial” tracks a Six Sigma DMAIC project at an IT support business from inception through completion. The project is aimed at helping the company become more competitive and profitable. Each part follows the project team as it works through another stage of the DMAIC methodology; the complete case study appears in Part 1, Part 2, Part 3, Part 4 and Part 5.

The Six Sigma project team reached the final step in making significant improvements to the operation and profitability of the call center of the IT services business. After the company’s senior leadership did the pre-project work, the team followed the first four steps of the DMAIC (Define, Measure, Analyze, Improve, Control) methodology and entered the final phase. The DMAIC roadmap called for work in these areas during the Control phase:

C1. Develop Control Plan: Include both management control dashboards that focus on the Y's and operational control indicators that monitor the most significant process variables, the X's.

C2. Determine Improved Process Capability: Use the same measures from Define and Measure in order to provide comparability and monitor impact in a consistent way.

C3. Implement Process Control: Create, modify and use data collection systems and output reports or dashboards consistent with the control plan.

C4. Close Project: Prepare the implementation plan, transfer control to operations, conduct project post-mortem, and archive project results.

C1. Develop Control Plan

The control plan addressed two views – one concerned with management control and the other with operational control. Management control focuses on the Y's, or outcomes of the process, and often on some of the X's as well. The level of detail was decided based on the interests of the specific managers concerned – some wanted a lot of detail, some much less. Hence, the management control plan needed to consider individual preferences so as to deliver enough – but not too much – information.

The operational control plan was more concerned with the X's that are predictive of the outcome Y's. Operational control information included both controllable and "noise" variables, and was provided more frequently than management control information.

Both types of control information pertinent to this case study are illustrated in step C3.

C2. Determine Improved Process Capability

The team linked the capability of the improved process to the baselines and targets identified during Define and Measure. It was important to use the same measures. (If it had been necessary to change the measures, the baselines and targets would have had to be restated in those terms to enable comparison.) Many different statements of capability were considered, including mean/median, variance, Cp, Cpk, DPMO, sigma level and percentile rank. The team knew that these alternate characterizations are largely equivalent and the choice is mostly one of preference. However, the team made its choices so that all concerned could share a common understanding of each measure's meaning. The table below shows how the team chose to present the data.

Measure               | Baseline                                   | Target                                     | Current
----------------------|--------------------------------------------|--------------------------------------------|------------------------------
Business Growth       | 1 percent                                  | 3 percent                                  | Requires more time to measure
Customer Satisfaction | 90th percentile = 75 percent satisfaction  | 90th percentile = 85 percent satisfaction  | Needs more data
Support Cost Per Call | 90th percentile = ~$40                     | 90th percentile = $32                      | ~$35
Days to Close         | 95th percentile = 4 days                   | 95th percentile = 3 days or less           | 3 days
Wait Time             | 90th percentile = 5.8 minutes              | 90th percentile = 4 minutes or less        | 4.4 minutes
Transfers             | 90th percentile = 3.1                      | 90th percentile = 2 or less                | 1.9
Service Time          | Mean: 10.5 minutes; StDev: 3.3 minutes     | (not stated)                               | Mean: ~8.8 minutes; StDev: ~0.9 minutes
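As noted above, the alternate characterizations of capability are largely interchangeable. For readers who want to see that concretely, here is a minimal sketch, assuming normally distributed service times and hypothetical specification limits (the case study does not give spec limits), of how Cp, Cpk, DPMO and sigma level all derive from the same mean and standard deviation:

```python
# A minimal sketch (not from the case study): Cp, Cpk, DPMO and sigma level
# computed from the same data, assuming normality and hypothetical spec limits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
service_time = rng.normal(loc=8.8, scale=0.9, size=500)  # minutes, pilot-like values

usl, lsl = 12.0, 4.0  # hypothetical upper/lower specification limits

mean, std = service_time.mean(), service_time.std(ddof=1)

cp = (usl - lsl) / (6 * std)                   # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * std)  # actual capability (penalizes off-center)

# Defect probability from the fitted normal tails, scaled to DPMO
p_defect = stats.norm.cdf(lsl, mean, std) + stats.norm.sf(usl, mean, std)
dpmo = p_defect * 1_000_000

# Sigma level, with the conventional 1.5-sigma long-term shift added
sigma_level = stats.norm.isf(p_defect) + 1.5

print(f"Cp={cp:.2f}  Cpk={cpk:.2f}  DPMO={dpmo:.1f}  sigma level={sigma_level:.2f}")
```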

The first current performance values were prepared from the pilot program results with plans for updating monthly or quarterly thereafter. To determine the initial values, the team considered the following:

Customer Satisfaction Percentile – The pilot data indicated improved customer satisfaction of about 77.5 percent versus a baseline estimated at 70 to 80 percent. The team recognized this was a very small sample, so it decided not to make a claim until more time had elapsed.

Support Cost – Using the analysis below, the team determined that the 95 percent confidence interval for support cost per call in the improved process was $33.50 to $33.90, versus about $37.00 for the baseline. The p-value indicated the change was significant. The team used Minitab to calculate the 90th percentile value as $34.94.

Support Cost Baseline | Support Cost Improved
----------------------|----------------------
$37.50                | $33.40
$36.00                | $34.00
$38.40                | $33.50
$40.00                | $33.90
$39.90                | $33.50
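The five baseline/improved pairs above are presumably a sample of the team's full dataset, so a re-analysis of just these values will not reproduce the article's interval or p-value exactly. Still, here is a sketch of the kind of comparison involved – a two-sample t-test plus a confidence interval for the improved mean, in Python rather than Minitab:

```python
# Sketch: the comparison behind the team's conclusion, run on the five sample
# pairs shown above (a subset of the full data, so results will differ).
import numpy as np
from scipy import stats

baseline = np.array([37.50, 36.00, 38.40, 40.00, 39.90])
improved = np.array([33.40, 34.00, 33.50, 33.90, 33.50])

# Welch's two-sample t-test (no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(baseline, improved, equal_var=False)

# 95 percent confidence interval for the improved-process mean cost
mean = improved.mean()
sem = stats.sem(improved)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(improved) - 1, loc=mean, scale=sem)

print(f"p = {p_value:.5f}; improved mean ${mean:.2f}, 95% CI ${ci_low:.2f} to ${ci_high:.2f}")
```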

Support Cost Per Call, Improved Process (percentiles calculated by Minitab percentiles macro)

Percentile | Value
-----------|-------
75 percent | $34.60
80 percent | $34.76
85 percent | $34.90
90 percent | $34.94
95 percent | $35.14
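A stand-in for the Minitab percentiles macro is straightforward in other tools. The sketch below uses numpy on the five sample values listed earlier; the table above was presumably computed from the full pilot dataset, so the outputs will differ:

```python
# Sketch: a numpy stand-in for the Minitab percentiles macro, applied to
# the five sample values shown earlier (not the full pilot dataset).
import numpy as np

cost_per_call = np.array([33.40, 34.00, 33.50, 33.90, 33.50])

for pct in (75, 80, 85, 90, 95):
    print(f"{pct}th percentile: ${np.percentile(cost_per_call, pct):.2f}")
```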

Days to Close – Using the macro illustrated above, the team determined the 95th percentile value for the improved process days to close during the pilot was 3 days.

Wait Time – Although the data needed to establish the baseline was not initially available, the baseline was determined from additional data collected and analyzed during the Measure and Analyze phases: 90th percentile = 5.8 minutes. The improved-capability 90th percentile was 4.4 minutes.

Transfers – The team determined the 90th percentile baseline to have been 3.1 transfers; the improved process value was 1.9 transfers.

Service Time – Baseline mean service time was 10 minutes with a 95 percent confidence interval of 9.7 to 10.4 minutes, while the improved mean was 8.8 minutes with a 95 percent confidence interval of 8.6 to 8.9 minutes.

Figure 1: Summary for Service Time Baseline
Figure 2: Summary for Service Time Improved Process

C3. Implement Process Control

The team began by planning the data collection process to be used, including preparing operational definitions for each data element and using automated tools whenever possible to minimize expense and effort. Heeding W. Edwards Deming's message to "drive out fear," the team was careful to prepare a well-thought-out communication plan to ensure the staff knew how the data was to be used and to address any concerns about punitive uses of the data. The team recognized that if members of the staff thought the data would be misused, they might be tempted to distort the data.

The team also verified that the process was under procedural control (i.e., standards and documentation were up-to-date, and the staff understood and followed the intended process). In preparation for implementing control charts on some of the process variables, the team recognized the need to segment some of the data, such as by "issue type." Significant variation was expected across, but not within, issue types (e.g., "problem" versus "question"), as the sketch below illustrates.
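To illustrate the segmentation check, here is a brief sketch using illustrative records and assumed field names (issue_type, days_to_close are not the team's actual schema):

```python
# Sketch: summarizing variation across issue types before charting.
# Records and field names are illustrative assumptions.
import pandas as pd

records = pd.DataFrame({
    "issue_type": ["problem", "problem", "problem", "question", "question", "question"],
    "days_to_close": [3.0, 2.5, 2.8, 1.0, 1.2, 0.9],
})

# Large differences across groups support keeping separate charts per type.
print(records.groupby("issue_type")["days_to_close"].agg(["mean", "std", "count"]))
```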

The team selected the appropriate form of control chart to suit each situation to be monitored. (Figure 3)

Figure 3: A Control Chart Chart
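Figure 3 itself is not reproduced here, but selection logic of this kind typically keys on the data type and subgroup size. A sketch of one common heuristic follows; the team's actual chart may encode different rules:

```python
# Sketch: a common control-chart selection heuristic of the kind Figure 3
# summarizes. The team's actual selection chart may differ.
def select_control_chart(data_type: str, subgroup_size: int = 1) -> str:
    """Suggest a control chart given the data type and subgroup size."""
    if data_type == "continuous":      # e.g., days to close, cost per call
        if subgroup_size == 1:
            return "I-MR (individuals and moving range)"
        if subgroup_size <= 8:
            return "Xbar-R"
        return "Xbar-S"
    if data_type == "defectives":      # count of failing units per sample
        return "p or np chart"
    if data_type == "defects":         # count of defects per unit
        return "c or u chart"
    raise ValueError(f"unknown data type: {data_type!r}")

# Days to close is measured one issue at a time, so an individuals chart fits.
print(select_control_chart("continuous", subgroup_size=1))
```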

One of the control charts the team implemented (Figure 4) monitored days to close for issue type = Problems. Similar charts were prepared for the other issue types and for support cost.

Figure 4: Days to Close – Problems

Implementing process control means more than preparing control charts. The team also defined a process for analyzing the output to help the operations group determine the right course of action when an out-of-control situation is encountered. One of the tools used was the Minitab "data brush," which isolates potential special-cause data points.
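As an illustration of that analysis step, here is a sketch of individuals-chart limits computed from the average moving range, with a simple brush-style flag for points beyond the limits. The days-to-close values are illustrative, not the team's data:

```python
# Sketch: I-chart limits from the average moving range, plus a brush-style
# flag for out-of-limit points. Values are illustrative; in practice the
# limits would be set from a known in-control baseline period.
import numpy as np

days_to_close = np.array([2.0, 2.5, 3.0, 2.2, 2.8, 2.6, 3.1, 2.4, 7.5, 2.7])

center = days_to_close.mean()
mr_bar = np.abs(np.diff(days_to_close)).mean()  # average moving range

# Standard I-chart limits: center +/- 2.66 * average moving range
ucl = center + 2.66 * mr_bar
lcl = max(center - 2.66 * mr_bar, 0.0)  # days to close cannot be negative

out_of_control = np.flatnonzero((days_to_close > ucl) | (days_to_close < lcl))
print(f"CL={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
print("special-cause candidates at indexes:", out_of_control)
```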

The team deployed a full set of control charts that monitored all of the variables of interest, but it also recognized a need for an overview to give a broader perspective without all of the detail. To satisfy that need, the team designed two “dashboards” for use by executive management and the call center. These dashboards overlap somewhat.

The team knew that new account growth and customer satisfaction were important to the senior vice president who runs the business unit served by this call center. The team recommended changes that were expected to impact these outcomes, so the vice president wanted to monitor what actually happened. He also wanted to see how these changes were impacting the cost structure.

The dashboards, one for the vice president and one for the call center manager, reflected both X's (leading indicators) and Y's (trailing indicators).

C4. Close Project

The team’s final effort was aimed at wrapping up the project and transferring control to the call center group. This last step included:

  • Developing and executing a plan to implement the improved process, including any necessary training.
  • Developing and executing a communication plan that informed all those affected by the change.
  • Conducting a transition review with key managers and staff, making adjustments and improvements they suggested.
  • Establishing the timeline and responsibilities for the transfer, and executing the transition process.
  • After an agreed interval, validating the financial benefits in conjunction with a representative of the finance department.
  • Conducting a project post-mortem from multiple perspectives – the team, the Champion/sponsor, and the financial results. (Emphasis on process improvement, not critiques of individual performance.)
  • Archiving in an accessible repository what the project team learned so other teams can benefit from it. (Special emphasis on items that have potential for re-use, and a “road-show” or poster presentation to communicate project results.)
  • Celebrating, along with well-deserved acknowledgment of team contributions (both the Six Sigma project team and the operations team).