Employees knew that service at their third-party call center had deteriorated in recent years. Their job was to handle queries from independent business owners about financial services offered by the call center's client. As in many call centers, the job was considered highly stressful because of the expectations around response times and resolution.
Initially, no one at the call center knew exactly how bad the problem was; all they knew was that the client was considering canceling the contract. While certain data were collected (time that representatives were available to answer calls, hold time, etc.), there were no data connected to the goals of resolving 75 percent of inquiries the first time around (first-call resolution) and 90 percent of inquiries within five days (five-day resolution). Furthermore, unknown to the call center, a key decision maker at the client was collecting data on how many calls she received each week from people who were unable to get answers from the call center. The number of these so-called escalated calls had grown to a mean of 15 per week.
When faced with this kind of problem, many companies simply lay off staff in an attempt to increase the productivity of the remaining group, hire more staff without solving the underlying problem, or try to improve results by forcing people to be on the phone more. This company took a different tack: it turned to a Lean Six Sigma expert for help, with the goal of improving performance to a level the client would find acceptable and thereby assuring renewal of the contract.
In the course of doing a basic process analysis, the Lean Six Sigma expert discovered that although the system collected a lot of data, the data was unreliable and of little use for understanding performance or making improvements. For example, the sampling was not random; one person would be designated to track their calls one day and someone else would do it another day. Also, the company had never developed a good way to track notes that could be used for follow-up, so no one could be sure if or when inquiries were resolved.
Baseline data showed that the company was falling far short of its goal, achieving only a 50 percent first-call resolution rate and 62 percent five-day resolution rate.
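Once the operational definitions are measurable, a baseline like this can be computed directly from call records. The sketch below is purely illustrative; the records and field layout are invented, not taken from the case.

```python
from datetime import date, timedelta

# Hypothetical call records: (date opened, resolved on first call?, date resolved)
calls = [
    (date(2024, 3, 1), True,  date(2024, 3, 1)),
    (date(2024, 3, 1), False, date(2024, 3, 4)),
    (date(2024, 3, 2), False, date(2024, 3, 9)),  # missed the five-day window
    (date(2024, 3, 3), False, date(2024, 3, 6)),
]

# Share of inquiries resolved on the first call
first_call_rate = sum(first for _, first, _ in calls) / len(calls)

# Share of inquiries resolved within five days of being opened
five_day_rate = sum(
    (resolved - opened) <= timedelta(days=5)
    for opened, _, resolved in calls
) / len(calls)

print(f"first-call resolution: {first_call_rate:.0%}")  # 25%
print(f"five-day resolution: {five_day_rate:.0%}")      # 75%
```

Against goals of 75 and 90 percent, rates like these would flag the same kind of shortfall the baseline study revealed.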
The Lean Six Sigma team initially approached the problem using standard methodology. One of the first challenges was creating a useful measurement system, including developing and testing operational definitions for first-call resolution and five-day resolution.
The first measurement system analysis (MSA) testing the application of these definitions failed the reproducibility test. This failure was actually a pivotal moment in the life of the project. All of the operators had done the job for at least a year, yet each had a different interpretation of what counted as acceptable, even after working together to create the operational definitions. More precise definitions followed, a second MSA passed, and measurement could begin.
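The reproducibility failure can be made concrete with a simple attribute agreement check: several representatives classify the same sample of calls, and we measure how often they all agree. This is only an illustrative sketch with invented judgments, not the team's actual MSA.

```python
# Each row is one sampled call; each column is one representative's
# judgment ("R" = resolved on first call, "N" = not). Invented data.
judgments = [
    ["R", "R", "R"],
    ["R", "N", "R"],  # disagreement
    ["N", "N", "N"],
    ["R", "R", "N"],  # disagreement
    ["N", "N", "N"],
]

# Between-appraiser agreement (reproducibility): fraction of calls
# on which every representative gave the same answer.
unanimous = sum(len(set(row)) == 1 for row in judgments)
agreement = unanimous / len(judgments)

print(f"between-appraiser agreement: {agreement:.0%}")  # 60%
```

An attribute MSA typically also compares each appraiser against a known standard; agreement this low would fail the study and, as here, send the team back to sharpen the operational definitions.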
As expected, the hypothesis tests demonstrated a negative linear relationship between the five-day resolution outcome metric and the available-to-answer process metric: the representatives with the highest available-to-answer rates had the lowest resolution rates. Interestingly, there was no meaningful relationship (r-squared of less than 20 percent) between available-to-answer and first-call resolution, so simply increasing the time that people were available to answer calls would not necessarily drive up the first-call resolution rate.
Through direct observation, random listening to recorded calls (recorded, rather than monitored live, so as not to skew activity on the call), and fishbone diagramming and analysis, root causes began to surface.
To better meet the needs of the process, the work unit made a series of changes. As shown in the following figure, these changes had an immediate and dramatic impact on performance. Within weeks, the first-call resolution rate rose from 50 percent to 90 percent and the five-day resolution rate rose from 62 percent to 98 percent. Equally important, the client's secret escalated-calls metric, the count of calls she received each week from people unable to get answers, dropped from 15 per week to fewer than 1 per month.
The client who once described this company as "the worst service provider I've ever known" later wrote an article for an internal newsletter celebrating the provider's improvement.
Call centers are not only ubiquitous, but also a hotbed for customer dissatisfaction. Their performance can make or break any service provider's indexes of customer loyalty. Only by homing in on what the client needs (or another department, if the call center is internal), building a process around those needs, and collecting measurements on key factors can a call center be an asset to the organization as a whole. At that point, other data (why people are calling, utilization of associates, hold times, etc.) become a critical and trusted part of understanding the entire enterprise.
Customer service representatives need to be able to answer the phone. They need to resolve questions quickly. Hold time needs to be minimal and at or under the customer's expectation. Yet these important metrics, taken alone, with little or no regard for other client-affecting service-level indicators, can lead to a loss of business.
While this project was not solely responsible for saving the contract, within four months of the project's completion date, the client renewed the existing contract for five more years.