TUESDAY, JULY 25, 2017

The Futility of Call Center Coaching

One-agent-at-a-time coaching is the go-to method in call centers for trying to improve center-wide output measures. But it is less valuable than many believe. Through the use of mathematical modeling and simulation, it is possible to see that coaching, even in moderate turnover environments, does not offer a return on investment (ROI). More-effective improvement strategies aimed at lifting the performance of the whole system, such as task consolidation and prerecorded call flows, should be used instead.

Coaching in Error

Dr. W. Edwards Deming, a 20th century quality guru, warned managers about an inference error pertaining to the performance of a team, unit or organization. This is the error: Because coaching may help an individual improve and because the performance of the system is the sum of each worker’s performance, coaching each worker will improve the performance of the system. Deming believed this was faulty logic.

Find Out More

More information on call center improvement can be found in a related article, “Cutting-Edge Methods Help Target Real Call Center Waste” by Dennis Adsit.

In Deming’s view, most coaching efforts are a form of tampering because they try to make improvements to individual components of what is largely common cause variation. He argued that the overall performance of a unit was much more a function of the quality of materials, process design, specs and machine performance – in other words, the “system.” Deming went on to demonstrate that the result of an improvement strategy based on coaching each employee would be no system improvement; rather, it would simply be increased variation in performance. He encouraged management to find ways to lift the performance of the whole system.

There is another practical reason there can be no ROI from investments in coaching: turnover. When turnover is high, workers who may have improved thanks to individual coaching investments walk out the door every month. As these people quit, they are replaced with new, lower-performing employees.

But many managers still believe in the efficacy of coaching, especially in the call center industry. Centers pay to record phone calls and do sample monitoring for use in sporadically coaching agents – despite the high rate of employee turnover.

Background for the Model

A mathematical model can be used to show the inefficiency of coaching individual call center employees. To understand the model, it is important to understand the assumptions that went into building it.

This model starts with 100 new agents in a call center. Performance is measured by the percentage of customers who give an agent a 5 on a 5-point scale (referred to as “percentage top box for customer satisfaction” in the model). Because no agent can be perfect and because agents must occasionally give an answer the customer will not be satisfied with, the top performance used in the model is 85 percent. The 100 agents start with a mean performance of 66 percent, with a standard deviation of 2 percent. No new agent starts at less than 60 percent; if they ranked lower than this on customer satisfaction, they would not have graduated from the initial training program.

The model is measured over 72 months, or iterations of the system. During each iteration, some agents improve because of coaching and some agents quit and are replaced with new agents. For the agents who stay, if their tenure is less than six months, they improve by 2 percent each month. If their tenure is at least six months but less than 12 months, they improve by 1 percent each month. The amount of improvement continues to halve every six months, following a declining exponential function. In my experience as a call center leader, long-tenured agents are not very responsive to coaching: their approach and work habits are too ingrained, and the amount of coaching available for these experienced agents is too limited.
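This declining improvement schedule can be sketched in a few lines. The function below is a reading of the assumptions just stated, not the author's actual model code:

```python
def monthly_coaching_gain(tenure_months, initial_gain=2.0, half_life=6):
    """Monthly improvement (percentage points) from coaching.

    Assumes the gain halves every `half_life` months of tenure:
    2%/month in months 0-5, 1%/month in months 6-11,
    0.5%/month in months 12-17, and so on.
    """
    return initial_gain * 0.5 ** (tenure_months // half_life)

# The first three years of the schedule, sampled every six months
schedule = [monthly_coaching_gain(t) for t in range(0, 36, 6)]
# → [2.0, 1.0, 0.5, 0.25, 0.125, 0.0625]
```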

The turnover rate in the base model is 36 percent annually, or 3 percent per month. This turnover is random, such that each month a new agent and an experienced agent are equally likely to quit; few make answering calls a career.

The variables used for top performance, mean performance and turnover rate can be adjusted by different centers using this model.

Constructing the Model

With the variables in place, it is time to look at the math used to construct the model. The distribution of tenure at month M, with a monthly turnover rate of T, was found using a Markov chain. The following matrix works on the assumption that turnover is constant and is not a function of tenure:
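A tenure distribution consistent with this description can be sketched with a Markov chain. The transition matrix below is a plausible reconstruction from the stated assumptions (constant monthly turnover T, tenure capped at the 72-month horizon), not the article's original matrix:

```python
import numpy as np

def tenure_distribution(months, turnover=0.03, horizon=72):
    """Distribution of agent tenure after `months` iterations.

    Each month every agent quits with probability `turnover`
    (independent of tenure) and is replaced by a new hire at
    tenure 0; otherwise their tenure advances by one month.
    """
    n = horizon + 1
    P = np.zeros((n, n))
    P[:, 0] = turnover                 # quit and be replaced by a new hire
    for i in range(n - 1):
        P[i, i + 1] = 1 - turnover     # survive and gain a month of tenure
    P[n - 1, n - 1] = 1 - turnover     # tenures beyond the horizon pool here

    dist = np.zeros(n)
    dist[0] = 1.0                      # everyone starts as a new hire
    for _ in range(months):
        dist = dist @ P
    return dist

d = tenure_distribution(72)
# In steady state, the share of agents at tenure t is roughly T * (1 - T)**t
```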

The “percentage top box customer satisfaction” (CS) of an employee with t months of coaching is the sum of the contributions from coaching (Ce), which can be expressed as follows:

where H is the half-life of the effectiveness of training, and CL is the limit of coaching.

The limit of coaching, CL, is the best possible customer satisfaction rating the given employee can reach; it can be set to whatever value fits the center being modeled. If turnover is set to 0, the performance of the system approaches CL in the limit.

CS can also be expressed in terms of the employee’s initial performance, M0, as follows:

Plugging the fourth equation into the second equation gives the following:

Thus, this is the performance vector of the system:

This vector contains the performance of employees at any given month of tenure. The contribution of any group of employees to the overall customer satisfaction of the system is the number of employees in that group, multiplied by their performance. The vectors of tenure and performance have been constructed such that, when multiplied, they will result in the call center’s employees’ average “percentage top box customer satisfaction.”
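Putting the pieces together: if the coaching contribution decays geometrically with half-life H, then CS approaches CL exponentially, and the center-wide average is the dot product of the tenure and performance vectors. This is a hedged reconstruction of the formulas described above, using the parameters stated earlier:

```python
import numpy as np

H = 6        # half-life of coaching effectiveness, months
CL = 85.0    # limit of coaching (best reachable CS, %)
M0 = 66.0    # mean starting performance of a new agent, %
T = 0.03     # monthly turnover rate

def cs(t):
    """CS after t months of coaching: exponential approach to CL."""
    return CL - (CL - M0) * 0.5 ** (t / H)

# Performance vector: CS at each month of tenure 0..72
perf = np.array([cs(t) for t in range(73)])

# Steady-state tenure distribution: share T(1-T)^t at tenure t,
# with the remaining probability pooled in the last bucket
tenure = np.array([T * (1 - T) ** t for t in range(72)])
tenure = np.append(tenure, 1 - tenure.sum())

# Center-wide average "percentage top box customer satisfaction"
system_cs = tenure @ perf
```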

Examining the Results

The model was run through the 72 months, or iterations, under numerous different starting conditions. No matter how the starting conditions were manipulated, the curve describing the performance of the 100-agent system took the same shape as the graph below.

Performance of the 100-Agent System Over 72 Iterations

Clearly, the system improves rapidly at first because the coaching effect is strongest for new agents. After about a year, however, the system stops improving, at a level well below the system maximum, because the effectiveness of coaching diminishes for experienced agents and turnover eats up the gains from coaching.

This result is also reflected in the distribution of the performance of individual agents in the system. A negatively skewed distribution took shape after about 12 iterations and remained stable through all 72 iterations. There is always a large group of high-performing, experienced agents and a trailing group of new agents growing in experience and performance. Because experienced agents continue to quit and are replaced by brand-new agents, the shape of this distribution never changes.

One could argue that the coaching investment is holding the system in place and keeping it from regressing, but it is, without question, not improving the system; after about 18 months, there was no discernible increase in system performance. Although coaching may be helpful for the new agents, the system’s performance rate stays the same due to continued turnover.
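A minimal agent-level re-implementation of the model, under the stated assumptions (100 agents, 3 percent monthly turnover, gains halving every six months, new hires floored at 60 percent), reproduces the plateau:

```python
import random

def simulate(months=72, n_agents=100, turnover=0.03,
             start_mean=66.0, start_sd=2.0, floor=60.0, cap=85.0):
    """Agent-level simulation of the coaching-plus-turnover model."""
    def new_agent():
        # No new hire starts below 60%: lower performers would not
        # have graduated from the initial training program
        return [max(floor, random.gauss(start_mean, start_sd)), 0]

    agents = [new_agent() for _ in range(n_agents)]
    history = []
    for _ in range(months):
        for a in agents:
            if random.random() < turnover:
                a[:] = new_agent()               # quits; replaced by a new hire
            else:
                gain = 2.0 * 0.5 ** (a[1] // 6)  # gain halves every 6 months
                a[0] = min(cap, a[0] + gain)
                a[1] += 1
        history.append(sum(a[0] for a in agents) / n_agents)
    return history

random.seed(1)
curve = simulate()
early_gain = curve[11] - curve[0]    # improvement over the first year
late_gain = curve[71] - curve[23]    # improvement after year two
# The curve climbs fast, then flattens well below the 85% cap
```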

Improving System Wide

The conclusion from this modeling effort is that coaching in systems with a broad mix of tenure and even a modest level of turnover will have little effect on the performance of the entire system. To improve the outputs of a system, managers must find an approach to process improvement that lifts the performance of all the agents at the same time, not one at a time. Task consolidation and prerecorded call flows are two options.

Task consolidation involves studying what agents do and looking for ways to make their work on the phone more accurate and efficient. An example: turning a process that involves cutting and pasting and opening 15 different systems into a one-click step that executes all the work behind the scenes.

Another systemic improvement strategy is to engineer what the agents actually say during the call. In this approach, call flows are built to respond to customers’ needs and inquiries and executed by agents using both prerecorded audio files and their live voice when needed. This solution can help eliminate accent barriers, deliver greater process adherence and achieve significant reductions in talk time, without the need for monitors to listen to calls.

Both solutions involve actually engineering and error-proofing sections of the call. These are the type of approaches that can transform a manufacturing environment, and they can do the same for a call center.

An Old Lesson

Although the lights went out in Vaudeville a long time ago, one of the standard gags was about the guy looking for his keys under a street lamp. Another guy stops by to help and asks, "Where do you think you lost them?" To which the man replies, "About half a block away, over by my car, but the light is better here."

To improve live call handling, start looking where the keys are likely to be (system-wide improvements), not where the light seems good (trying to improve agents one at a time).

Acknowledgement: The author would like to thank Andrew Pyzdek, who did all the mathematical modeling work described in this paper. Pyzdek is an engineering student at the University of Arizona. He can be reached at andrew@pyzdek.com.





Comments

Dennis Adsit

Not surprisingly, some have taken issue with this paper, but recently someone took issue in writing. You can read their response here: http://www.elkindgroup.com/coaching/is-coaching-a-waste-of-time/

This is an important discussion for the whole call center industry. Please join the conversation.

Sincerely,

Dennis Adsit

Dennis Adsit

I write all these deliberately provocative pieces and send them out into the ether. They are mostly (and some would add “rightly”!) ignored. But finally someone disagreed…in writing…to my Futility of Call Center Coaching piece. I was as happy as the scientists at SETI (Search for Extra Terrestrial Intelligence) finding a patterned response amidst the intergalactic noise!

You can read my original article here: http://www.isixsigma.com/operations/call-centers/futility-call-center-coaching/ The article has a lot of math in it, but I summarize it in English below.

And you can read the response here: http://www.elkindgroup.com/coaching/is-coaching-a-waste-of-time/

NICE sells a type of call recording equipment that is the cornerstone of most monitoring and coaching efforts. So it doesn’t surprise me that they weren’t going to take the pillorying I gave the go-to solution they enable lying down.

And I love that someone did respond, not because I think their response is correct, but because I love the chance for dialogue about this important topic. Given that coaching is expensive and given that it is the go-to method most call centers are relying on to improve agent metrics, isn’t it important that we can prove we are getting a return on our investment?

I actually agree with a couple of the points they make, but overall, unfortunately, Andy, Rob and Corey have provided no evidence in their response that proves there is an ROI from coaching in call centers. Many of their claims are, in fact, completely wrong; others, I’m afraid, reflect a poor understanding of effective approaches to operations improvement.

Here is a paraphrase of their main point in the first three paragraphs of their response and really their main point overall: “we have experience and data that shows continuous improvement from coaching, but it is inappropriate to share client information.”

As an aside, they actually say “a comprehensive program of performance management supported by behavioral coaching.” I am not exactly sure what that means but am guessing it has to include letting go the bottom x% based on performance. This adds in a completely different dimension which I did not argue against or model in my original article. I am going to ignore the PM part and keep the focus here on their defense of coaching’s ROI, which is what my paper was about.

Speaking of what my Futility of Call Center Coaching paper was about, let’s review that. First, and most important of all, I never said coaching couldn’t help an individual agent improve. It would be easy to produce example after example about individual agents helped by coaching. In fact, this country and probably the world has a love affair with the power of a coach to improve performance and change lives. And you know what? Rightfully so.

My original paper was just a simulation…a mathematical model of what happens when we “add” a little bit to the performance (I called that add “coaching”) of each “employee” each “month” and then some of those “employees” who we just “helped get a little better” “leave” (at various turnover rates typically found in US and overseas call centers) and we replace them with new and by definition lower performing employees.

There were no real employees. There was no real coaching. No one really quit. It was a simulation.

But what happens in a system with parameters like that? The answer: the turnover eats the monthly gains in individual agent performance and center-wide performance quickly plateaus, just like it does in the call centers the KomBea management team has run and is currently working with. If performance in a call center is not continuously improving, then the “coaching” (or whatever is “adding” to the performance of individual agents who are constantly leaving) cannot have an ROI, because while there is a lot of “I” there is no “R.”

Back to their main critique. First, I understand the need to protect client information. While they don’t need to share information on the clients, it sure would be good to see the data itself, because, as I mentioned, the principals in my company, KomBea, have the opposite experience. We have worked with hundreds of call centers, including running our own, and we have never seen continuously improving anything when it comes to agent performance.

Now, defining terms in any argument is important. By “continuously improving,” I mean over many months and years, just as you would expect in a manufacturing environment for measures like defects, throughput, changeover cycle time, scrap or WIP. Go into a world-class manufacturing environment and those measures are all continuously improving…over years and even decades. People lose their jobs in manufacturing if those measures aren’t improving. This is what we should be solving for in call centers. If by continuous improvement the authors mean over a few months or for a “season,” heck, that is just as likely learning-curve improvement that would have happened without coaching.

Please guys, show us the continuously improving data you have…masking it in any way you need to protect your clients. And more important, 1) show us that the continuous improvement is over years, not a few months, 2) demonstrate the improvement is due to coaching and not other system wide changes or learning curve improvements that would have happened without coaching, and finally show us 3) that the benefits of that improved performance were worth the investment in terms of recording software licenses, monitoring personnel, and off-phone coaching time (which has to be paid for with extra agents on the phone covering for the agents being coached to maintain service levels) that is the real cost of getting any improvement. I’ll eat my hat if they can.

The authors go on to make three specific points to attempt to refute my claim that coaching has no ROI. The first part of their first point is that coaching and process improvement are separate activities that both add value.

Here we are in complete agreement…they are separate activities. I know process improvement adds positive ROI and I will show an example of that below. My point however is that I don’t think the way most call centers are coaching agents has positive ROI.

Moreover, though PI and coaching are distinct solutions, real problems can arise when we apply the wrong improvement approach to a particular problem. Because the go-to method is coaching, center management often fruitlessly throws the coaching solution at problems that would be more effectively solved with process improvement. When all you have is a hammer, you treat everything like a nail and just whack away. Whacking away is generally not a great strategy unless you happen to find yourself hiking in a jungle.

Their second paragraph under point number 1 is where I really take issue. Here is their claim: “Monitoring is the best way to ensure that front-line agents are adhering to business processes and executing them in the manner that was intended.” This statement could not be more false and reflects the worn-out mental model that the whole call center improvement industry rests on…that the best way to improve the overall performance of my center is by trying to improve the performance of each individual agent one-at-a-time. This mental model is wrong.

Before I share an example, consider this: do they run manufacturing operations by having an army of inspectors videotaping each worker and then coaching them one at a time? In manufacturing they got rid of all the inspectors decades ago and moved to baking quality and process adherence into the process itself through the use of automation, error-proofing, Andon lights (to signal process problems), etc. If videotaping is not the best way to achieve process adherence and efficiency in a complex operation like manufacturing, why is it the best way to do it in a complex operation like call centers? That’s my point. It’s not.

Let’s get into a real life example. Many calls in the call center industry have a disclosure component…something that the agents must tell the customers every single time they get one of those types of calls. This is often a key component of call quality metrics. The authors claim that monitoring is the best way to ensure the process is adhered to and the correct disclosure is provided. I disagree. The best approach is to know the disclosure is done correctly without any checking of any kind being necessary.

To this end, disclosure compliance is best accomplished with agent-assisted automation, pre-recorded audio and error-proofing. This approach results in a process where the call cannot advance or be completed until the proper statements are read to the customer by the system and the customer acknowledges having heard and understood them, which is what you want to happen. When that step is completed, then the agent can process the order and complete the call. Done in this fashion, that portion of the call is always 100% correct and no monitoring or inspection or coaching is needed.
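As an illustration of this kind of error-proofing (hypothetical code, not the actual ExactCall product), a gated call flow can literally make it impossible to complete the order before the disclosure has been played and acknowledged:

```python
class DisclosureGate:
    """Toy error-proofing sketch: the order step stays locked until the
    prerecorded disclosure has been played and the customer has
    acknowledged it.  (Hypothetical; not the actual ExactCall API.)"""

    def __init__(self):
        self.played = False
        self.acknowledged = False

    def play_disclosure(self):
        # In a real system this would trigger the prerecorded audio
        self.played = True

    def record_acknowledgement(self, customer_said_yes):
        if not self.played:
            raise RuntimeError("Disclosure has not been played yet")
        self.acknowledged = customer_said_yes

    def process_order(self, order):
        # The call cannot advance until the disclosure step is complete
        if not (self.played and self.acknowledged):
            raise RuntimeError("Cannot proceed: disclosure not acknowledged")
        return f"order {order} completed"
```

Built this way, no monitoring or inspection is needed for the disclosure step: the process itself guarantees compliance.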

We are currently working with one of the largest financial institutions in the world. Disclosures are critical-to-quality in the financial services industry, and failure to deliver them is punishable by stiff fines and seriously bad PR. In many industries…financial services, healthcare, etc.…less than 100% disclosure compliance will soon be unacceptable.

Our client told us that for years and years the best they have ever been able to achieve is 92% compliance and that level of performance was rare. Most of the time, compliance was in the high 80%s and was most certainly not continuously improving. This despite the typical huge investment in recording software seat licenses, an army of monitors, and regular off-phone coaching time for agents.

Are Andy, Rob and Corey really suggesting that our client should spend more on recording software, hire more monitors, and take the agents off the phone for longer to try to get past 92%? Are they arguing that all the coaching investment over the last several years which could never produce results greater than 92% has had a positive ROI? Are Andy et al. arguing that our client just isn’t doing it right and if they let more low performers go, the next replacement crop of agents will be able to perform better? Are they really arguing there is a positive ROI in here somewhere?

From the first day, from the first hour, in fact from the first call this financial services giant took using our ExactCall solution, the disclosure compliance rate was 100% and it has never dropped below 100%. Between-agent variation on disclosure compliance has been completely eliminated. Read that sentence again. This was achieved without monitoring a single call. In fact, the use of our technology has cut monitoring costs in half because the monitoring team never listens to the disclosures anymore…a significant part of every call…because they are always exactly correct.

I keep blathering on about ROI so here is how our client calculated the ROI on this process improvement investment:

1) the value of fines avoided (they know what they have paid and what they would pay without the improvement…going forward, they would be able to see the fines paid by other financial institutions that don’t use the technology), plus

2) the savings from the reduction in monitors, plus

3) the savings from the reduction in extra agents needed to meet service levels due to less off phone coaching time, plus

4) a reduction in training time (since you don’t have to teach the agents how to do the disclosures), plus

5) a reduction in AHT (pre-recorded audio is generally faster and is not challenged as much by callers).

Add up all those tangible financial benefits and subtract the cost of our solution. The payback period is less than six months. Everything after that is gravy. I am still waiting for someone to lay out the ROI case for one-on-one agent coaching that clearly.
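With placeholder figures (the article lists the benefit categories but not the client's actual numbers, so every value below is hypothetical), the payback arithmetic looks like this:

```python
# Hypothetical monthly benefits, in dollars -- illustration only
monthly_benefits = {
    "fines_avoided": 50_000,
    "fewer_monitors": 20_000,
    "fewer_cover_agents": 15_000,   # less off-phone coaching to cover
    "reduced_training": 5_000,
    "reduced_aht": 30_000,
}
solution_cost = 600_000  # one-time cost, also hypothetical

monthly_total = sum(monthly_benefits.values())
payback_months = solution_cost / monthly_total
# With these numbers the investment pays back in 5 months;
# everything after that is gravy
```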

Later in their first point, they go on to quote the late quality guru, W. Edwards Deming. They say Deming’s 6th of 14 points was: “Institute modern methods of training on the job for all, including management, to make better use of every employee. New skills are required to keep up with the changes in material, methods, product and services, design, machinery, techniques and services.” They offer this as a defense of coaching.

Before I respond, I just have to say the fact that they quoted Deming delights me. I don’t know if there is a required reading list for call center leaders, but if there is, Deming’s Out of the Crisis is not on it because the way most centers are run leave Dr. Deming spinning in his grave. So to see that there are some call center experts who are even aware of Deming’s 14 points gives me hope.

Onto their point: I never said not to train employees. They obviously have to know what to do. But also Deming’s point explicitly says new skills are required to keep up with a host of changes on a variety of fronts.

Here is the difference between what my idea of training is and what I think their idea is: I want to train the agents how to use software that ensures the call is completed correctly.

It sounds to me like Andy et al. would like to train the agents and hope they remember what they are supposed to do and hope they do it correctly at the appropriate time, but then, knowing people aren’t perfect, they would like to buy a bunch of monitoring software, hire a bunch of “inspectors” because that is what monitors are, take the agents off the phone, show them what they are doing wrong, and then put them back on the phone and hope they do it right, and then when they get tired of repeating the process they will apply their “comprehensive program of performance management” (read as “fire”) to the employees, then they will recruit, hire and train new ones and hope they work out better.

Can you say Myth of Sisyphus? Moreover, there is a lot of “hope” in their approach. Hope is not a strategy. Nor is the word hope to be found anywhere in books, articles or descriptions of the Toyota Production System, one of the most powerful improvement methodologies ever devised for complex operations.

While looking up Deming’s point #6, which I feel Andy et al. have misinterpreted, I found myself wondering why they skipped over Deming’s point #3: stop depending on inspections.

Inspections are costly and unreliable – and they don’t improve quality, they merely find a lack of quality.
Build quality into the process from start to finish.
Don’t just find what you did wrong – eliminate the “wrongs” altogether.
Use statistical control methods – not physical inspections alone – to prove that the process is working.

Andy et al.’s second of three points is that coaching reduces turnover. Here again, I actually agree with them. Coaching probably does reduce turnover: knowing your manager cares about you and wants you to get better should make agents want to stay longer.

I argued here that because call center agent jobs were so repetitive, tiring and stressful that efforts to make the job easier (process improvement) were more likely to reduce turnover than efforts to make the environment nicer (coaching is one example, but there are a lot of things call centers try to do to “spruce” the place up…some are actually quite comical…see the article for examples).

But, that said, let me give them the benefit of the doubt for a second. In fact let me extend Andy et al.’s argument: it is actually quite possible that coaching reduces turnover so much that the ROI from coaching all comes from turnover reduction and the benefits of any performance increases they are able to get from coaching are all upside. This is an empirically testable question and it should be investigated.

However, for now, my article mentioned that call center turnover in the US averages 36%. All or certainly almost all those centers have coaching programs for the agents, and still the turnover rate is 36%…every seat turns over in less than three years. And this should not be surprising: who would want to make a career out of being a call center agent? On a good day it is very repetitive and exhausting talking to people all day long and trying to stay positive. On a bad day, with frustrated customers literally yelling at you and saying things you taught your kids not to say, it is extremely stressful.

In popular overseas locations for call centers like the Philippines and India, it is even worse. Call center turnover offshore is routinely north of 100%. In fact, it is not uncommon to find turnover approaching 150-200%. Now of course, they monitor and coach their agents offshore as well, but still their centers can completely turn over twice a year. Are Andy, Rob and Corey arguing that the coaching these domestic and offshore centers are doing is reducing turnover from the even more stratospheric levels it would reach if they didn’t coach? Hmmm…OK.

The effects of coaching on turnover and the potential ROI of that is a different article. My article was meant to highlight the corrosive effects of turnover on call center performance. Should you pour resources into coaching and trying to improve your center one-agent-at-a-time or into process improvement that lifts the performance of all the agents?

When you pour improvement dollars into a resource that is likely to walk out the door in less than a year (outside the US) or in less than three years (in US-based call centers), isn’t it going to be really tough for that investment to show an ROI? All that experience, training and coaching (read as “better performance”) quits and you start over with inexperienced people (read as “lower performance”). When you pour resources into process improvement the improvements remain even when (not if) the agents leave. More on this investment tradeoff at the end.

Their final critique of my article was that long-tenured employees can and should continue to improve. They didn’t like that I built a model with an assumption for a declining effect of “coaching” for longer-tenured “employees.”

I won’t argue with them on this. I explicitly stated the terms that went into my model so that researchers could build their own models with their own assumptions to see under what conditions one-agent-at-a-time improvements in high turnover environments pay off. To those researchers that want to replicate it: good luck coming up with model parameters that 1) show an ROI and 2) even remotely resemble call center environments here on Planet Earth.

But I do take issue with their second and fourth paragraphs under their third point. In their second paragraph they say, “These call centers…must continually raise the bar of performance expectations at every level of their organizations…In an organization that is continually raising the bar, a solid performer who doesn’t continue to improve becomes a poor performer who doesn’t have a job.”

This is tough talk. Let’s go back to our client, who desperately needed perfect disclosure compliance, yet averaged in the high 80’s. How would they go about raising the bar of performance expectations here and how would that help?

I hope what they have in mind would not look like this obviously ridiculous example: no one is running a 3:30 mile, so should we raise the bar to 3:15? Or should we keep the few guys who are under 4:00, cut from the team anyone running the mile slower, and go find new guys who we hope will be better, so we can show we are being tough with our performance expectations?

When I used to lead Six Sigma improvement efforts, we used to say people are three sigma at best (low-90s percent process adherence)…they can never be six sigma because, well, they are human. You can raise the expectations bar all day…you can publicly pillory the poor performers in the parking lot…right next to the high performers’ employee-of-the-month preferred parking places…and it won’t make an ounce of difference.

A lack of understanding of “system” performance…what it means and how to improve it…is reflected in their final paragraph, “Our clients often find that it’s by focusing on their third quartile that they can maximize the overall impact of their coaching efforts.”

What does the third quartile of agent performance represent? The agents between the 50th and 75th percentile on some measure. In our example, that would have been the agents hitting disclosure compliance just above the center’s 88% average. This is well within the normal distribution of performance of this “system” and would actually be one of the worst places to focus your limited process improvement dollars and efforts.

Deming is known for a lot of things, but one of his classic demonstrations was his red bead experiment (http://www.youtube.com/watch?v=R3ewHrpqclA). (I really encourage you to follow that link to see Dr. Deming in action!) He had a large bag of white and red beads, white being “good,” red being a “defect.” He would have managers reach into the bag, pull out ten beads and then plot their performance against each other. He would give out “awards and praise” to those who drew the fewest red beads and would “reprimand” those who had too many. Those getting scolded would howl in protest, saying there was nothing they could do…it was luck of the draw.
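The red bead demonstration is easy to reproduce in a few lines. Every "manager" below draws from the same system, so the spread between them is pure common-cause variation, exactly Deming's point:

```python
import random

def red_bead_draws(n_managers=6, draws=10, p_red=0.2, rounds=5, seed=0):
    """Each 'manager' scoops `draws` beads per round; a bead is red
    with probability p_red.  Since every manager draws from the same
    bag, differences between them are pure luck of the draw."""
    rng = random.Random(seed)
    return [[sum(rng.random() < p_red for _ in range(draws))
             for _ in range(rounds)]
            for _ in range(n_managers)]

results = red_bead_draws()
# Every manager's red-bead counts bounce around the same average,
# yet someone is always "best" and someone "worst" in each round
```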

That was Deming’s teachable moment and, if it is possible for an 80-year-old to pounce, that is what he would do. He would say: you can measure your workers all day, but the fact remains that you have a “system” (the hiring, training, management, tools, knowledge base systems, compensation, measures, dashboards, etc.) that is producing a performance average and a distribution of performance around that average. Deming said the best thing to do was to try to raise the performance of the entire system, not focus on the performance of each worker.

The only place it makes sense to focus on individual worker performance is at the extremes: those statistically better and those statistically worse than the rest. That is why Deming recommended the use of p-charts and other control charts and statistical techniques to make sure you were distinguishing signals from noise (see Deming’s Point #3 above).
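To make the signal-vs-noise idea concrete, here is a minimal p-chart calculation in Python. The 88% adherence average matches the example earlier in this piece; the sample size of 100 monitored calls per agent is a hypothetical I have chosen for illustration:

```python
import math

# Hypothetical floor: average disclosure adherence p_bar across agents,
# each agent scored on n monitored calls (n is assumed for illustration).
p_bar = 0.88   # system average adherence
n = 100        # monitored calls per agent

# Standard 3-sigma p-chart limits for a binomial proportion.
sigma = math.sqrt(p_bar * (1 - p_bar) / n)
ucl = min(1.0, p_bar + 3 * sigma)
lcl = max(0.0, p_bar - 3 * sigma)

print(f"UCL = {ucl:.3f}, LCL = {lcl:.3f}")

# An agent at 84% adherence sits well inside these limits:
# common cause variation, not a candidate for individual "treatment."
agent_score = 0.84
in_control = lcl <= agent_score <= ucl
```

With these assumed numbers, the limits work out to roughly 78%–98%, so the third-quartile agents at 84–88% are statistically indistinguishable from the system average.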

Those statistically better than the rest of the floor need to be studied. What are they doing? How are they able to be so much more effective? Have they identified a best practice that we can steal and propagate across the floor? Those statistically worse…not worse relative to some arbitrary line that someone with no understanding of how to improve a system pulls out of the air, declaring “above this good, below this bad”…need immediate focus and help. And if they can’t improve, they need to be let go because, statistically, the “system,” however good or bad it currently is, should be able to attract and produce someone who performs better. As for those in the middle, in Deming’s view: leave them alone and focus your limited resources on improving the system.

To conclude, I am glad Andy, Rob and Corey responded. They made a couple of great points, and I agree with those points. However, I don’t think they offered any evidence that coaching has a positive ROI. I am sure they disagree with this rejoinder, and I am actually hoping they will share their data and come out swingin’ against my position here.

But the real reason I am glad they responded is that this dialogue is absolutely essential to the improvement of the call center industry. This industry is terrible! Agents don’t like their jobs (that is why turnover is so high!), customers don’t like to have to call, and the costs are still way too high. I argued in my paper, Call Centers: The Deep and Still Largely Untapped Vein of Operational Profits, that no matter what companies have done to lower their call center costs (outsourcing, self-service, support communities), an additional 40% of the costs could still be removed. I stand by that claim. And agent performance is not continuously improving. If call center leaders were held to the same standard that manufacturing leaders are held to in terms of performance and quality, every call center leader in the world would be fired. It is bleak. It is not getting better. How could more of the same be the way out of this wilderness?

You say: I agree…performance is bad, and we have to invest in both process improvement and coaching to get better.

I say we have to be smarter than that. Who has all that money to throw around willy-nilly and hope? We can’t let our admiration of the power of teaching and coaching and our love affair with great teachers and coaches blind us to when that solution is cost effective and when it is not. It is absolutely not the solution to improving agent performance in high turnover environments like call centers.

So if we want to improve agent metrics and the improvement dollars are limited (and they always are), the bulk of our investment should be in process improvement. We need new tools and solutions to get the caller’s problem diagnosed correctly every time. We need new tools and solutions to get the disclosures right every time. We need new tools and solutions to correctly handle the warranty claim and to process the Return Materials Authorization every time. We need new tools and solutions to recap the order or reservation to ensure it is correct every time. We need new tools and solutions to make the right cross/upsell every time.

Not some of the time. Not most of the time. Every time. It’s 2013. We have been putting large numbers of people in very large rooms and routing calls to them for 40 years. How can we still think 85% process adherence is OK?

Agent-assisted automation…with its pre-programmed system actions, pre-recorded audio, and error-proofing…holds the promise of being one of those tools and solutions. And we know approaches like this have a positive ROI, no matter how high the turnover is. Only after we feel we have exhausted the process improvement opportunities should we turn to methods with less clear investment cases.

And if call centers are anything like manufacturing, the process improvement opportunities might never be exhausted.

Andy

Please elaborate on how coaching increases variation among agents. I don’t get that part. Thanks. Excellent article.

Dennis Adsit

Andy, the short explanation is that when you view common cause variation (most of the variation between agents) as special cause variation, and you “treat” it, you increase variation in the system.
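Dennis’s point can be illustrated with a toy simulation in the spirit of Deming’s funnel experiment: a stable process left alone versus one “adjusted” after every result to compensate for the observed deviation. The score scale, mean, and standard deviation below are assumptions for illustration only:

```python
import random
import statistics

random.seed(1)  # fixed seed for a repeatable demo

def run(weeks=500, tamper=False):
    """Simulate weekly scores from a stable process (mean 88, sd 3).

    With tamper=True, every deviation from target triggers a
    compensating 'adjustment' to the process setting, i.e. treating
    common cause variation as if it were a signal (Deming's funnel,
    rule 2).
    """
    setting = 88.0
    scores = []
    for _ in range(weeks):
        score = random.gauss(setting, 3.0)
        scores.append(score)
        if tamper:
            # "Treat" the common-cause deviation as a special cause.
            setting += 88.0 - score
    return scores

left_alone = run(tamper=False)
tampered = run(tamper=True)

print("sd, hands off:", statistics.stdev(left_alone))
print("sd, tampered :", statistics.stdev(tampered))
```

In theory the compensating adjustments double the variance (the standard deviation grows by a factor of about 1.4), and the simulation shows the tampered process is visibly noisier than the one left alone.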

George Gatzimos

Just a few thoughts. Based on your 72-month study, what were the criteria for the coaches? I don’t see any conversation around that. As we know, the quality of coaching goes a long way toward coaching’s overall success. Because the “quality of the coach” varies so much, I think you’d need more than one study running at the same time with different coaches. This brings me to my second thought: what was the consistency of the agents? What was their motivation? Did these agents have the same drive at the end of the study as they did at the start? And how are we measuring that?

Additionally, what is coaching? What the study considers coaching could be widely different from what I am coaching, or how I am coaching.

Coaching is a very gray area. To slap formulas onto coaching and then build a case that traditional coaching is no longer valid seems like putting a square peg in a round hole.
