The sales pitch for new call-tracking software sounded perfect to the management of an internal call center at a mid-sized financial institution. The call center handled technical problems and policy issues for branch offices across the country – though it was not doing a very good job at either, judging by rising complaints and staff hours. The new software, the vendor promised, would repay the $500,000 investment within six months to a year by reducing complaints, shortening the time it took to answer calls, and eliminating the need for some staff positions.
So the company bought the software, had it installed, and taught people how to use it. Then the company waited for the results to show up on the balance sheet. The trouble was, months went by and costs in the center remained high. Counting the half-million-dollar investment in software, the center was significantly over budget, and the senior manager was still getting complaints about poor service. For example, it was common practice among branch employees to hang up if they did not get a clear answer or if the call center staffer sounded confused. They would then immediately call back so they would be routed to a different staffer, who would almost certainly give them a different answer. This was a clear sign of both inconsistent service quality and artificially inflated call volumes.
The company decided to use the Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) methodology to resolve the problem.
The senior manager commissioned a team to study the problem and improve the process. The team’s goals were to (a) reduce the average handling time, (b) reduce the number of unnecessary calls, and (c) achieve better consistency in the answers given to the branches. The desired financial return was, at a minimum, to recoup the $400,000 to $500,000 it had cost to install and implement the new software.
Though the new software program had not yet delivered the promised results, the team discovered that it did have several key features:
1. It tracked the duration of every call: This meant the team could gather data to establish baselines and judge whether any changes had the desired impact. Creating control charts of average handling time (AHT) for policy and procedure (P&P) calls and for system support (SS) calls allowed the team to evaluate process capability. The members found that both measures were significantly above their targets.
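As a rough sketch of that baseline step, the control limits for an individuals (I-MR) chart can be computed directly from a series of call durations. The formula (mean ± 2.66 × average moving range) is standard SPC practice; the durations and the target below are illustrative, not the project’s actual data.

```python
# Illustrative call durations in seconds -- not the project's real data.
durations = [412, 388, 455, 430, 472, 401, 446, 419, 463, 437]

mean = sum(durations) / len(durations)

# Moving ranges between consecutive observations
moving_ranges = [abs(b - a) for a, b in zip(durations, durations[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard individuals-chart limits: mean +/- 2.66 * average moving range
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

target = 360  # hypothetical AHT target in seconds
over_target = mean > target  # the team found AHT well above target
```

A chart like this, recomputed after each change, is what lets a team say whether an improvement is real or just noise.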
2. It allowed calls to be coded according to the type of question: This made it possible for the team to pinpoint the types of problems that generated the most calls and focus on reducing those calls first – a Pareto approach. Unfortunately, when it began the call center project, the team found that about 40 to 50 percent of the calls were being coded as “unknown.”
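The Pareto approach described here amounts to ranking call types by volume and tracking the cumulative share, so the critical few categories stand out. A minimal sketch, using hypothetical call-type names and counts (with “unknown” dominating, as the team found):

```python
from collections import Counter

# Hypothetical call-type tallies for illustration only.
call_codes = Counter({
    "unknown": 480,
    "password reset": 210,
    "policy lookup": 150,
    "system error": 90,
    "account change": 70,
})

total = sum(call_codes.values())

# Rank call types by volume and accumulate their share of all calls --
# the Pareto view used to decide which call types to attack first.
cumulative = 0
pareto = []
for code, count in call_codes.most_common():
    cumulative += count
    pareto.append((code, count, round(100 * cumulative / total, 1)))
```

With numbers like these, the top category alone is nearly half the volume, which is exactly why the “unknown” coding problem had to be fixed before any Pareto analysis could be trusted.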
The team immediately realized that such a high share of calls coded as “unknown” was trouble: it is impossible to find solutions to a problem that is “unknown.” So the team launched its Analyze work by digging into why the “unknown” category was used so often. Many of the call center staff used it as a work-around to avoid procedures they did not understand or that contradicted what they felt was the best way to handle an issue. Tagging a call as “unknown” gave staffers the freedom to handle it more or less however they wanted.
Immediate steps were taken to improve the use of the software package. The team charted the number of “unknown” calls on an eight-foot-high board installed at the entry of the office building. Having a simple visual signpost helped raise awareness of the importance of reducing the number of “unknown” calls.
The team also discovered that few people referred to the operating manual for the software because it was an enormous, unwieldy binder.
Another related line of investigation compared the average handling time to the number of “unknown” calls. In Figure 3, the project team stratified this data by color coding it according to the work groups of the individual call handler. There was no clear pattern in the data, but this analysis did allow the team to recognize the call handlers with both low average handling time and low “unknown” figures as top performers. The importance of this became evident in the Improve phase. The team also created a similar chart that displayed how much handling time was spent on the different types of calls. That allowed the team to find the critical few types of questions that took the most time.
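The stratification the team did in Figure 3 can be approximated by flagging the handlers who sit below average on both measures. The per-handler figures and thresholds below are hypothetical; the point is the selection logic, not the numbers.

```python
# Hypothetical per-handler stats: (average handling time in seconds,
# "unknown" calls per week). Illustrative values only.
handlers = {
    "A": (410, 12),
    "B": (520, 30),
    "C": (395, 9),
    "D": (480, 25),
    "E": (405, 28),
}

# Use the group averages as simple cutoffs for each measure.
aht_cut = sum(aht for aht, _ in handlers.values()) / len(handlers)
unk_cut = sum(unk for _, unk in handlers.values()) / len(handlers)

# Top performers sit below average on BOTH measures -- the people
# whose practices the Improve phase could study and spread.
top_performers = sorted(
    name
    for name, (aht, unk) in handlers.items()
    if aht < aht_cut and unk < unk_cut
)
```

Note that a handler can be fast without being accurate (low AHT but many “unknown” codes), which is why both dimensions matter in picking whose practices to copy.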
The project team launched two efforts to help reduce the number of “unknown” calls and improve call consistency:
A third effort was launched to reduce the types of calls that were received most often. To address the problem, the project team expanded to include associates from the branches. The enlarged team then brainstormed ideas to reduce the most common call types. As a result, some new documentation was added to products to clarify instructions, and information was added to certain computer screens within the system to reduce questions and confusion. The team saw immediate improvement from these actions (Figures 4 and 5).
In addition to the improvements in average handling time, the results included:
As the average handling time dropped and service improved, the call center was able to reduce its workforce. (This was done without layoffs, through attrition.) Within the first year, the actual cost savings were $732,000, well above the initial target of $500,000.
The Control phase of the project emphasized:
The team also created plans for continual monitoring of which call types consumed the most handling time, along with continuing the collaborative efforts with branches to address those calls.
Lessons that the team documented as advice to other project teams included: