iSixSigma

Metrics for Customer Response Centre

  • #34451

    Vinay
    Member

    I am currently working on a project to define metrics for a customer response centre. The team responds to customer phone calls and emails, with a dependency on the operations teams. The current performance rate of 95% is considered very good by management and meets accepted industry standards. However, if this 95% is converted to sigma it is approximately 3 sigma, which is not acceptable to management, as the organisation has set a standard of 4 sigma for all processes regardless of their complexity.
    Can anyone suggest a tool or methodology whereby we can track the metrics treating 5% as the normal defect rate, thus resulting in six sigma performance?
    Any defect beyond the 5% is what we want to track and publish to our customers / management. Please note that I am looking for a suggestion which maintains data integrity and also provides me with all defect-related data, so that I can analyse and improve the process continuously.

    #95029

    mand
    Member

    I just have one question: who cares what management wants? What happened to what the customer wants?
    Sam

    #95030

    Vinay
    Member

    Sam,
    Thanks for the response. The customer in this case is the operations team, which also agrees that 95% is an acceptable service level per industry standards. If this level is acceptable, can we have a solution? In addition, there is no denial that customer needs should be the focus, and projects are under way to address defects.

    #95031

    mand
    Member

    I guess I should look at this with a different question: who are the end customers? This is the point of six sigma. If we can deliver to our end customers to their specifications, we will succeed where our competitors do not, and at the same time we will achieve business success as we reduce the cost of quality.
    This is what six sigma is about. Customers don’t care about averages; they care about the variation. One of my favorite examples of customer service metrics comes from a book about amazon.com, my dog days at amazon.com. The main character was asked to cut his call time in half, so he hung up on every other call. He got his average down, all right.
    What did the customer feel?

    #95032

    Vinay
    Member

    Sam,
    There is no denial that averages are not the way to measure any process metric. However, the reality is that in most cases there are two elements to any management reporting: one is to showcase the performance of the team internally / externally, and the other, which we as six sigma teams are more interested in, is to reduce defects.
    In this particular case the second objective is addressed to the best of our abilities given the infrastructure constraints, but the first objective still needs addressing and some brainstorming, as we cannot change much at the current levels without huge technology investments.

    #95343

    Sushant
    Member

    Vinay
    Interesting query.
    The way I see it, there are just three sources for setting these targets:
    Voice of the business (management, that is)
    Voice of the customer
    Voice of the process
    The inputs for deciding targets should be based on what the customer wants, but should also take into account industry benchmarks and best practices.
    To summarize, a combination of VoC and industry benchmarks is the best way to set targets.
    Sushant
    [email protected]
     

    #95345

    Ronnie Jones
    Member

    Which contact center metric is your 95% performance measured against?
    Common contact center metrics: service level, on-time to respond, on-time to resolve, accuracy of answer, customer satisfaction, seat utilization, agent efficiency, cost per contact, etc.
    And are your targets benchmarked against world-class contact centers such as CVG, Harte-Hanks, RMH, etc.?

    #95346

    Vinay
    Member

    Sushant / Ronnie,
    Let me paraphrase the help required. I have a set of metrics defined for the Customer Response Centre, in line with your suggestions. We track, and want to continue tracking, sigma scores instead of the percentages most of the industry follows.
    We are achieving 95-96% each month against standards set by customers, but when this data is converted to a sigma value it is lower than 4.0 sigma. The organisation has set 4.0 sigma as the minimum acceptable level for all processes, including this one. I want guidance on using the 95% expectation level as the base for the sigma calculations: only deviation beyond the 95% should be counted as a defect, and meeting the 95% target should equal six sigma.
    Hope the above clarifies more.
    Is there a way by which the 95% can be treated as the baseline?

    #95349

    Rohit krishna
    Member

    Hello,
      I believe you are in customer care, and the answer rate of only 95 percent comes to around 3 sigma. Irrespective of whether your support is voice-based or non-voice, you first need to cut down the actual handling time spent with each customer, including both pre-call and post-call work. You also need to examine your staffing capability: is there enough manpower available during peak hours, and vice versa? You also need to ensure that training is uniform and that everyone on the floor is on the same page (process-knowledge-wise). It would also be worth checking whether any rework or duplication is being done. For example:
    In a voice-based service, capturing the customer’s issue correctly on the first attempt and so avoiding repeated questions would not only improve your productivity but also enhance customer satisfaction, and moreover you get more work done with the same staff count.
     

    #95361

    Kumar
    Participant

    Very simple. The sigma value is a measurement of reality vs. expectation. So if your customer (management is one of them) wants only 95% and you deliver it, you are better than six sigma. You might, however, question the goals/expectations. The above is only a reporting matter; the fact remains that there is 5% room for improvement.
    There are a lot of arguments one can make for or against the above statement, and I would rather not get involved. You can analyze the applicability and suitability of this to your situation. -ravi
     

    #95364

    ProjetosDigitais Adriano
    Participant

    Regarding your affirmative “We are achieving 95-96% each month against standards set by customers but if this data is converted to Sigma value it is lower than 4.0 sigma. The organization has set 4.0 sigma as the min. acceptable level for all processes including this process. I want guidance to convert the 95% expectations level as base for the sigma calculations any deviation beyond 95% should only be taken as defects. If the 95% targets are met it should equal to 6 sigma. Any defect beyond 95% should be taken as a defect.”
     
    I do believe that you are making a big mistake.
     
    You can’t change the six sigma reference to 95%. That is a fundamental conceptual error.
     
    A 95% yield equates to roughly 3.14 sigma.
     
    Six sigma equals 99.99966%.
     
    So if you want to drive efforts toward a SIX SIGMA level in a Customer Response Centre, you must aim for 99.99966% on your standard metrics.
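The yield-to-sigma arithmetic above can be reproduced with Python's standard library; a minimal sketch, assuming the conventional 1.5-sigma long-term shift:

```python
from statistics import NormalDist

def sigma_level(yield_fraction: float) -> float:
    """Convert a process yield (fraction of non-defective outcomes)
    to a sigma level, applying the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(yield_fraction) + 1.5

# 95% yield -> ~3.14 sigma; six sigma corresponds to 99.99966% yield
print(round(sigma_level(0.95), 2))       # 3.14
print(round(sigma_level(0.9999966), 2))  # 6.0
```

Rebasing the calculation so that a 95% result "equals" six sigma would amount to redefining the defect rather than improving the process, which is exactly the conceptual mistake described above.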
     
    I was the first member of the Intelligence Group at the largest fixed-line operator in South America / Brazil, with key responsibilities for the Multimedia Contact Center. As a specialist in Six Sigma, IT, telecom and customer care, my role was sizing projects to meet the strategic performance goals agreed with customers and the government agency.
     
    One of the key performance indicators was the average time of response (ATR). We had a 93% score; however, at nearly 400,000 calls per day that meant losing close to 28,000 calls per day.
     
    ANATEL set the standard at 95%, but we set our own standard at a 5.219 sigma level, because at that level we would lose only about 40 calls per day (99.99%).
     
    The first set of results reached 97%, exceeding the government requirement but still far from the desired internal results.
     
    Considering the business opportunity in those recovered calls, senior management (the president and others) understood that this meant more earnings for the company, so the investment in SIX SIGMA was paid for by its financial results.
     
    Our complete KPI mapping covered about 300 indicators, mapping all the end results (those interfacing with customers) and the supporting processes. This is a very detailed approach that you cannot shortcut. You need to go down to the root if you really want a highly consistent Six Sigma approach that is permanent.
     
    In some flexible projects I adopted the best practices of SIX SIGMA, the concepts, and some tools and methodologies, but oriented to 99% or 95% targets. Those are not formal SIX SIGMA projects, but they deliver results to the business and prepare the minds of senior administrators for formal six sigma projects.
     
    I believe that is your situation: first make senior management believe in SIX SIGMA through internal results, and afterwards adopt a formal approach to the six sigma level, if the business supports such an investment.
     
    *** Do you know the Microsoft Six Sigma Accelerator solution?

    Adriano Barbosa – [email protected], Brazil.

    #95377

    Vinay
    Member

    Ravi, I need some more explanation (if it is very simple), with examples and the exact methodology, so that we do not get into data integrity issues.

    #95380

    KBailey
    Participant

    Forgive me if I’m missing something here, or if I sound a little harsh.
    Isn’t part of the point of Six Sigma to stay ahead of the curve? Why on earth would you ever want to settle for “industry standard?” Do you WANT to go the way of the dinosaur?
    Standards and expectations change over time. 95% may be good enough for now, but when your competition starts achieving 98% consistently, 95% won’t be good enough.
    Management may be shooting blindly in arbitrarily deciding all processes should be 4 sigma, but that doesn’t excuse twisting Six Sigma into a tool to justify complacency. Don’t settle for the standard, SET the standard. That’s what’s known as “competitive advantage.”
    k

    #95397

    John J. McDonough
    Participant

    k
    You can’t “stay ahead of the curve” if you don’t know where the curve is.
    Before you can set realistic targets, you need to understand how good you need to be.  If you are in an industry where the norm is 7 sigma, then a target of 5 sigma isn’t going to cut it.  Certainly, you don’t want to take your benchmarks and blindly aim for the mean.  You want to understand where your competitors are, and then make an informed judgement about where you want to be.
    The quality guys would like you to believe that every quality improvement pays for itself by eliminating COPQ.  That is absolute horsehockey.  Many do.  Certainly where you have wasted material, effort, capacity, etc., there are opportunities to make some money from COPQ.
    But when you look at the market, the only way to understand the opportunity cost is to understand the market.  Without benchmarking, you can’t understand the market.  If the market is huddled around 4 sigma, and I think I can get to 5 sigma, then there may be money to be made getting there.  But if I’m already at 5 sigma, I may not be able to gain any market share by going to 6.  In fact, if I have to raise my prices as a result, it may be a losing proposition.
    In Six Sigma, we are told to “follow the data”.  We all serve some sort of market, and the benchmark gets us data about that dimension of our product.
    –McD
     

    #95403

    KBailey
    Participant

    John,
    You’re right, but did you read the original question? We know where the curve is. The benchmark was given by Vinay as 95%, or about 3 sigma. As near as I could tell, Vinay’s question amounted to, “How can I redefine 6 sigma to equal 3 sigma, because we only care about preventing defects over and above our competition’s defect rate?”
    I acknowledged that management may be misguided in arbitrarily setting the 4-sigma standard. They probably didn’t apply Six Sigma in making that determination. Vinay’s assigned project isn’t to improve the strategic goal setting process – it’s to improve quality of service in the Customer Response Center. Maybe there isn’t even a cost-effective solution at this time to achieve improvement to 4 sigma.
    However, fudging the metrics in order to report an inflated sigma level will lead to entrenched mediocrity. Vinay should stick with legitimate process quality metrics, identify possible solutions for improvement, and present the options to management with the best cost/benefit information possible. Let management decide whether it is appropriate to hold to or back off their 4 sigma objective in this case.
    Even if we know where the market is now, we don’t know how soon the competition will improve to 4, 5 or 6 sigma level, nor how long it will take for customer expectations to become more rigorous. If the historical data is available, we might be able to see a trend from past expectations to present, and extrapolate. That is, we could see how the standard progressed from 80% to 90% to the current 95% standard. Still, extrapolation is imperfect. Management will ultimately have to make a best guess about what’s going to happen, factoring that into the decision.

    #95407

    John J. McDonough
    Participant

    kbailey
    I have to admit that I was responding more to a long string of posts suggesting that benchmarking was the road to mediocrity than to Vinay’s original question.
    There are several things going on here.
    Our sigma level depends entirely on how we define the defect.  If I manufacture bolts, and the specification says that a half-inch bolt can be no less than 0.49 inches, then a bolt that is 0.495 inches is in spec and isn’t a defect.
    Vinay has a spec that says 95% of all calls are handled in some way he doesn’t describe.  Apparently, 95% is his specification.  Assuming he does this calculation daily, then he is at six sigma if he fails to meet this 95% specification no more than one day every 805 years.
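That 805-year figure can be checked with a quick back-of-the-envelope calculation, assuming the conventional long-term six-sigma rate of 3.4 defects per million opportunities and a once-daily pass/fail check:

```python
# At 3.4 defects per million opportunities, a once-daily pass/fail
# check fails roughly once every 294,000 days, i.e. about 805 years.
DPMO = 3.4
failure_rate_per_day = DPMO / 1_000_000
days_between_failures = 1 / failure_rate_per_day   # ~294,118 days
years_between_failures = days_between_failures / 365.25
print(round(years_between_failures))  # 805
```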
    Now you and I may think that this is a loose specification for a customer call center.  And it may well be, but although I have an uneasy feeling about that 95% number, I have no data on which to base the assertion that it should be higher. (You seem to think it should be 99.9997 or so).
    The way to get that data isn’t to simply say it has to be this way.  It is to look at a number of things. What are my competitors doing?  If they are all at less than 90% will I gain any market share by getting to 99%?  In my industry, is there any advantage to being good at this call center thing?  Maybe the 95% is an indication that I’m spending too much money at this.
    I need to read the benchmarks in the context of my VOC data, of course.  It could be that my customers expect 75%, and my 95% is a real delighter.  Getting beyond that is not going to make my customers measurably happier.
    On the other hand, as others have pointed out, it could be that the 95% is indicative of process inefficiencies.  If I get rid of those inefficiencies, I may be able to cut my costs, and as a side effect, get better than 95%.  That may be, but again, it’s not simply a consequence of asserting that my spec should be 99.9997 or any other number.
    Now you have correctly pointed out that there may very well be competitive data involved in the 95 in the first place, and it would be foolish to assume that the competition is sleeping.  It may well be that Vinay should bring this issue to his management’s attention.  But the question of whether or not the spec should be 95 is a separate question from whether or not he is meeting the spec.
    If the spec is 95, then the spec is 95.  It’s not in any way “cheating” to treat the spec as if it were the spec.  Now Vinay may think the spec should be 99.  Fine.  He should collect the data and go to management and explain how they are going to make money by changing the spec to 99.  But I think you said that.
    –McD
     

    #95408

    KBailey
    Participant

    I believe the situation we’re dealing with is one in which the paying customer is satisfied with a defect rate <= 5% for the consumer, who most likely is their end customer. For the process of handling calls, the process customer isn't the paying customer, it's the consumer.
    In outsourcing arrangements, you must consider the specs of both the paying customer and the end consumer, or you AND your paying customer will eventually lose out to the competition.

    #95427

    Jonathon L. Andell
    Participant

    Maybe I missed something here, but 95% of what? Don’t we need an operational definition of what is being measured, and how? If it was in fact defined and I overlooked it, please forgive my lapse…

    #95622

    Jim Braun
    Participant

    Vinay,
                    
    It seems you’re asking some very general questions requiring very complex and specific answers.  If you can give some further information to clarify some of your generalities, I may be able to assist.
     
    (1)   Please define the key customer process you are referring to.
    (2)   Please define the current metric you are tracking.
    (3)   Please define a defect for this process.
     
    In the meantime, allow me to offer some insight from my experience with managing quality and customer satisfaction in call centers.
     
    I have found there are two types of defects in servicing customers: (1) Defectives, which are defined as anything that occurs that causes the end-user (consumer) to call/email again or to discontinue doing business with the company, (thus the whole call is considered defective) and (2) Defects, which are annoyances, such as misspellings, (thus, the call/email has a few dings and scratches, but  met the serviceable attributes of the transaction) etc.  However, multiple “defects” can turn into a “defective.”
     
    Defectives are defined as inaccurate answers given to the customers, an extreme soft skill breach (cussing at customer, hanging up on customer, etc.), lying to a customer, doing something illegal, and/or not meeting the end-users needs as communicated by them.  (You need to define for your customer, “What would cause them to call/email back because they didn’t get what they needed, or would cause them to stop doing business with you?”  However, I have found this to often be set at the beginning of each transaction by the customer.  You can define the 80 percent that call/email, but the other 20 percent are too specific to the individual customer.  Therefore, we defined the “customer specification” as set by the customer at the beginning of the call/email [i.e., I need to change my order from …] and that set the “defective” specification for the call/email.)
     
    Defects are usually all the other attributes we normally evaluate.  These are found within categories such as professionalism, listening, communicating, opening, closing, etc.
     
    In a service industry, most attributes are not absolute, except whether data was correctly input into a system, or whether words are misspelled.  However, most attributes that can be evaluated with an absolute “go/no go” are not going to cause a customer to stop doing business with a company. (The exception to this is if you set a customer’s expectation and you do not deliver on it.  [i.e., “I will call you back tomorrow at 2pm to see if you received your CD” or Improperly inputting an order and the customer does not receive it, etc.])
     
    In my experience, trying to achieve customer satisfaction in a call center environment is too varied and individual to each transaction to be able to track anything with the precision intended by six sigma science.  However, I have found ways to achieve success using six sigma at the theoretical level.
     
    Please bear with me a little longer.  We first had to reduce variation.  For us, variation was defined two different ways; (1) an agent’s average handle time (AHT), and (2) how each agent performed the process as defined.  The first, (AHT) is fairly easy.  Collecting the time each agent is on the phone (talk time + hold + wrap time) for every call, then average it per week.  The second (performance of the process) required that we mapped the process and fully understood the policies and procedures that controlled the process.  We then had to do several assessments (sitting with several random agents) to see if the process was actually followed as designed. 
     
    (Side note) As time went on, we used our quality team to monitor agents’ process performance and separate poor performance caused by (1) training deficiencies from (2) agent discipline issues. When remote monitoring confirmed poor process performance, we would follow up with a side-by-side.  If the agent, on their best “behavior,” still performed poorly, we could categorize the problem as a training or individual-capability issue and offer assistance.  If they could perform the process when “supervised” (side-by-side monitored) but didn’t when remotely monitored, we assumed it was an individual discipline issue and focused coaching on motivation and possibly job-enrichment techniques.
     
    We next took the weekly average handle times for each agent and put them in a data set for the team (keeping the names attached to the averages).  When we charted it using a control chart showing +/- 3 standard deviations, we noticed that about 30-50 percent of the agents consistently performed worse than the team average (you can use a simple run chart, too).  These 30-50 percent of agents were the reason the team’s average handle time was where it was. (Normally we would expect each agent’s times to vary randomly above and below the average; however, when grouped with their team, the “consistently poor outliers” could be identified as specific agents.)
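A minimal sketch of that control-chart screen, using Python's standard library; the agent names and weekly average handle times below are entirely hypothetical:

```python
from statistics import mean, stdev

# Hypothetical weekly average handle times (seconds) per agent
aht = {"A": 310, "B": 295, "C": 480, "D": 305, "E": 520,
       "F": 300, "G": 315, "H": 290, "I": 470, "J": 308}

team_mean = mean(aht.values())
team_sd = stdev(aht.values())
ucl = team_mean + 3 * team_sd   # upper control limit
lcl = team_mean - 3 * team_sd   # lower control limit

# Flag agents running above the team mean (coaching candidates)
slow = sorted(a for a, t in aht.items() if t > team_mean)
print(f"mean={team_mean:.0f}s, UCL={ucl:.0f}s, LCL={lcl:.0f}s")
print("above team mean:", slow)
```

With real data this would be run per week over a rolling window, so that "consistently" above the mean (not a single bad week) is what flags an agent.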
     
    Next, we targeted investigations into each of these agents’ performance.  We wanted to find out: were their long handle times necessary to provide accurate resolutions, or were faster agents also providing accurate resolutions, meaning the long-handle-time performers needed to improve?  Once we compared this team’s customer satisfaction results, we found that average handle time did have a slight correlation with customer satisfaction.  What we found ran against what we believed: agents who were faster actually had better customer satisfaction scores!  Thus, another insight — the customer not only expects accurate resolutions, but wants them as quickly as possible.
     
    Further investigations with the “poor average handle time performers” mostly revealed they had problems working with the software, using proper queries to find the right knowledge base article to assist the customer, shooting from the hip, etc.  Each of the agents’ supervisors developed a plan to work with (or get help for) their weaknesses.  The agent would commit to an expectation (performance level) and date to achieve.
     
    Note: the agent set the level and date; however, the supervisor had to ensure it was sufficient.  If the rate of improvement was unacceptable, the supervisor would ask, “What barriers do you see that will prevent you from reaching ‘x’ by ‘date’?”  The supervisor would need to commit to removing these barriers (if realistic) in order to get commitment from the agent to achieve the rate of improvement needed.  One last note on this: it needs to be understood by the agent and supervisor that improvement “must” happen.  Failure to improve at an acceptable rate could mean termination.  If communicated properly, agents can understand that they may not be fit for this work.  So long as the company and management are sincerely committed to a fun and comfortable work environment, everyone will strive to perform their best.  After setting a couple of improvement milestones over a few weeks, if the agent is not “cutting it” they also know it and are preparing themselves for ending the job (or being moved to an easier queue that matches their capabilities).  You will find that terminating employment is rarely, if ever, needed.  When it is needed, it goes much more smoothly, and the agent leaves in a very friendly and professional manner.  Who knows, you may need their services later in a different program.
     
    After three months we began seeing an amazing phenomenon occur: Customer satisfaction results were increasing and average handle times were decreasing (negative correlation)!!  In hindsight, we attributed it to a few factors: (1) By identifying the slower performers and giving them the additional training they needed, we better equipped them to meet customer needs, (2) the Hawthorne effect was probably influencing them, too.  We spent a lot of time with them and we were genuinely concerned with their performance and we were genuinely taking actions to help them (NOT punish or threaten them), and (3) because these metrics (customer satisfaction and average handle times) were being reviewed frequently between themselves and their supervisors, there was no doubt what was most important (inspect what you expect!).  (Note: it was not how fast they performed a call, it was how often they could get high marks from their customers coupled with how quickly and consistently they could achieve this combination.)
     
    Final thought:  six sigma calculations and goals must have hard definitions and measures in order to have confidence in the results. If your customer cannot define a defect, but “knows one when it happens,” it becomes a soft measure and is best improved by going directly to customer satisfaction metrics and looking for broadly defined areas of improvement (i.e., costs too much, takes too long to fix, wait too long, call back 2-3 times to get it fixed, etc.).  If you can find solid definitions and metrics, they are probably “dissatisfiers” such as “didn’t get my order on time”; otherwise, you will probably be the first to find new ones (beyond AHT, accuracy, efficiency metrics, service level, backlogs, on-time, volume, etc.).  However, the time you spend doing that is time and effort lost from the simpler approach I outlined earlier, which will let you make improvements faster and at a level well beyond your wildest expectations.
     
    Call center performance is built on very complex processes, complex measures and complex people.  We tend to keep looking for more and more metrics to improve, and we fail the people who need one-on-one personal leadership, assistance, or job enrichment more than they need another metric to be measured against.  Don’t confuse these issues!
     
    Good luck!

    #95624

    Fin
    Participant

    Jim
    I just read your post and found it very helpful. I am starting the introduction of Six Sigma to a call center. I have a couple of questions.
    1. How did you get the customer satisfaction rating for each operator? Is it a case of random surveys of callers?
    2. Did you introduce any Lean concepts?
    3. How did you decide what was an acceptable AHT?
     
    Thanks
     
    Fin
     

    #95680

    Jim Braun
    Participant

    Fin,
    1. How did you get the customer satisfaction rating for each operator? Is it a case of random surveys of callers?
    Answer: Our clients used a couple of different processes.  Two of our clients used web-based survey processes that asked the customer to take a survey after support was provided.  However, since the client had outsourced this to a third party, the raw data was never provided to us (we were told it was a cost issue).  Since the third-party summaries did not provide the kind of analysis we needed, the survey results were only a reminder of how we were doing.  They gave us no information we could analyze to find opportunities for improvement (very much a waste!).
    However, one of our clients did it right:  a marketing firm performed daily calls to survey random customers within 48 hours of their getting support from our site.  They surveyed about 2 percent of the total call volume.  Agents were not surveyed equally: by randomly selecting calls, the agents with the most calls would naturally have a higher chance of having one of their calls surveyed.  Sometimes this meant one agent was surveyed only twice in a month while another was surveyed five times; over a year’s period, however, they were fairly equal.  The survey results were screened by the client to filter out incomplete surveys and to ensure the agent surveyed, the call queue, etc. were properly identified (our agents sometimes took calls in a couple of different queues, so the survey had to be coded to the right queue).  The results were posted to a shared FTP site for us to download as an Excel file once weekly, then loaded into our own SQL database.  We used an Access front end and Excel to extract the data, trend overall sat and dissat results, and break results down into the survey categories (process, product, service, etc.).  A website was created for agents to view their customers’ ratings and comments, with functions such as trending results over 30-day, 90-day, 6-month and 1-year periods.  It also showed the team’s average so they could compare their performance to their peers’.  (At the time, we called the webpage “MyCustomerSat.”)
    One note: agents often feel a survey didn’t really apply to them, since the customer was upset over the product rather than the service they provided, or the customer called in twice and the first agent upset the customer while the last agent (who was surveyed) provided excellent service.  We countered these arguments by not attaching any threat to employment from customer satisfaction results, and all of management spoke with a single voice: “The customer doesn’t keep track of who did what and when.  When they have a bad experience, it is with the company (not a specific site, or sales, or ‘x’).  If you get upset over a poor survey, get upset that the whole experience (product, service, process, etc.) is still in need of improvement, and see if there is anything within the survey’s comments that we can use to improve.  Pass it on, make a suggestion, but don’t take it as a personal critique (unless you know it is targeted at your service).”
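The volume-proportional exposure described above falls out naturally from uniform random sampling of calls; a small illustration with invented agent names and call counts:

```python
import random
from collections import Counter

random.seed(42)  # reproducible illustration only

# Hypothetical call log: the agent who handled each call
calls = ["agent_a"] * 500 + ["agent_b"] * 300 + ["agent_c"] * 200

# Survey ~2% of total volume, drawn uniformly at random over calls
sample = random.sample(calls, k=round(0.02 * len(calls)))

# Busier agents naturally end up with more of the surveyed calls
print(len(sample), Counter(sample))
```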
    2 Did you introduce any Lean concepts??
    Yes.  Standardization, however it had both successes and failures.  I can’t say it was due to not having the CEO’s full backing.  The CEO was what we called a “quality-geek” and drove quality ever time she spoke.  In hindsight I can only guess what we did incorrectly, but I may still not have the full insight. 
    (1) We needed to provide JIT training to a senior leadership group who would design the standardization process the company would use (people support what they help to create).  Instead, they left it up to a hand full of us quality-guys, and though we attempted to train what we came up with, it was not supported well because I believe most believed it was being “thrust upon them” since they did not help to create it.
    (2) We needed to develop formal JIT training for the standardization process, then have the training group run a live process using what was taught, spread over a few hours at a time across several weeks (everyone has a “day job”).  Instead, we tried cramming a lot of theoretical training into a couple of hours, doing one example in the classroom, and then expecting everyone to know, understand and use it going forward.
    (3) We needed to do a better job of implementing the standardized process: better training, better follow-up (process audits) and better document control to keep up with changes.  Though we had a very good document control process, most people were so busy with daily operations that they didn’t even know it existed (again, better training and implementation were needed).  We also had a very good and thorough change management process; however, key people were not involved in creating it, and the resulting lack of ownership prevented them from embracing it.
    3. How did you decide what was an acceptable AHT?
    Historical AHTs were initially used to set the overall goals for staffing and the performance needed for the staffing/scheduling plans to work (and to meet net profit goals).  Daily, weekly, monthly and seasonal loads were analyzed to adjust distribution and staffing levels to meet the performance requirements (AHT, service levels, etc.).  Once you have the initial staffing levels to meet the service level requirements, you can put in place the system I described earlier.  The focus will always be on the lower performers: investigating how to improve their performance and driving them toward better customer satisfaction and lower handle times.  The focus must be on finding how you can help them become better at the process or operation they perform (better at queries, better at call control, better at …).  Customer satisfaction will go up and AHT will go down. 
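The arithmetic behind sizing staff from historical AHT and volume can be sketched briefly.  The numbers, shrinkage figure, and function names below are illustrative assumptions, and a real scheduling plan would layer an Erlang C queueing buffer or a simulation on top of this base workload:

```python
def offered_load_erlangs(calls_per_hour, aht_seconds):
    """Offered load in Erlangs: concurrent agent-hours of work per hour."""
    return calls_per_hour * aht_seconds / 3600.0

def base_staff(calls_per_hour, aht_seconds, shrinkage=0.3):
    """Minimum headcount, inflated for shrinkage (breaks, training, meetings).
    Meeting a service level target requires additional queueing headroom."""
    load = offered_load_erlangs(calls_per_hour, aht_seconds)
    return load / (1.0 - shrinkage)

# 120 calls/hour at a 360 s (6 min) AHT is 12 Erlangs of work,
# or about 17.1 agents after 30 percent shrinkage.
print(round(base_staff(120, 360), 1))
```

This also makes the AHT lever visible: trim 30 seconds from the same call mix and the base workload drops from 12 to 11 Erlangs before any service-level buffer is applied.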
    Initially we heard a lot of agents telling us that they could not go that fast and still keep customer satisfaction high.  There were two forms of this argument: (1) “When I give a customer my undivided attention and take my time with them, they are more satisfied,” and (2) “I am meeting my AHT goal of ‘x’ minutes or seconds; I don’t need to go any faster.”
    Our answer was simple, and in two parts.  (1) It can be done faster with high customer satisfaction, because the team’s average is faster and most of your team is already doing it: “I’m not asking you to perform any differently than the rest of the team, so your first goal is to reach the team’s average.”  (2) AHT goals are based on historical performance.  We developed cost and revenue models whose root premise is AHT and volume.  Company profits (if you are an outsourcer) increase when costs are reduced, and reducing AHT (along with reduced availability) while meeting the customer’s needs on every transaction is the most direct contribution you can make to improving profits.  If you meet the customer’s needs on every transaction, volumes will be reduced too, saving the client money.  The reputation you gain for cutting the cost of support calls and emails will outweigh the revenue lost to reduced volume: your client will be happy to market your value and not only give you additional business, but help you gain new clients.  (If you are an internal team, reducing AHT is likewise a way of keeping costs from eating away at revenues already gained.  And there is another selling point: if you meet the customer’s needs on every transaction, the volume of calls and emails will be reduced, customers remain loyal, costs fall, and customer loyalty increases revenues.)
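The cost-model argument rooted in AHT and volume can be made concrete with back-of-the-envelope arithmetic.  The volumes and the loaded hourly rate below are invented for illustration, not figures from the models described above:

```python
def monthly_handle_cost(volume, aht_seconds, loaded_rate_per_hour):
    """Labor cost of handle time: volume x AHT (hours) x loaded hourly rate."""
    return volume * (aht_seconds / 3600.0) * loaded_rate_per_hour

baseline = monthly_handle_cost(50_000, 420, 30.0)  # 7 min AHT
improved = monthly_handle_cost(50_000, 390, 30.0)  # 30 s faster per call
savings = baseline - improved

# 50,000 calls x 30 s saved is roughly 417 agent-hours,
# or $12,500/month at a $30/hr loaded rate.
print(round(savings, 2))
```

The same function prices the volume effect: resolving issues on the first contact shrinks the `volume` term directly, which is why first-contact resolution and AHT reduction pull in the same direction on cost.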
    The above description of agent improvements on reducing costs and increasing revenues is why I’m an advocate of agent incentive plans.  It doesn’t cost a lot to provide additional incentive bonuses to those achieving the highest performance, to those with the greatest improvement, and to the teams with the best overall performance.  We did this with a “success shares” program that was a key driving force to motivate people to perform their best.  (By the way, it was developed mostly by the agents and supervisors themselves!)
