iSixSigma

Metrics for Call Center Agent Monitoring

Viewing 30 posts - 1 through 30 (of 30 total)
  • Author
    Posts
  • #31532

    BFR
    Participant

    I’m scoping a project to measure the effectiveness of call center agents in an outbound sales environment (e.g., phone etiquette, compliance).
    I’m looking for metrics to measure how well the agent conducts the phone call. Can anyone share any best practices?

    0
    #83467

    devashish chakrabarti
    Participant

    Hello!
    In an outbound environment, the following are the CTQs of any transaction (here, a call):
    1. Script (from which the agent reads / follows the probing sequence)
    2. Soft skills (pre- and post-call closing)
    3. Product knowledge
    Before the actual calibration starts, you need to define the above parameters in terms of “critical to quality” points. E.g., if the agent misses any vital part of the script, he scores 0 on that part. However, for some other part of the script that is merely “nice to have,” you do not score him 0; instead, on a second-pass calibration you give him bonus points. The point is: you need to identify and quantify the CTQs in each and every step involved in the calling process.
    Part 2: Metrics (sales). Based on your operations, you need to define the hit rate and set up slab rates with assigned points. With these you can calibrate the agents into different slabs and score them. The most crucial thing here is to find out how, and at what stage, you define a hit (as per your SLA with your client).
    Thus, as an exercise, you need to fulfill both parts (for outbound telesales calls) in order to put a measurement system in place as a process in your call centre.
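    The two-part scheme above (critical vs. nice-to-have script items, plus hit-rate slabs) could be sketched roughly as below; the item names, point values, and slab thresholds are invented for illustration, not a standard.

```python
# Illustrative sketch of the two-part scoring described above.
# Item names, point values, and slab thresholds are made-up examples.

def score_call(observed):
    """Score one call: vital items missed score 0, nice-to-have items earn a bonus."""
    items = {
        "opening_script":   {"critical": True},   # vital: miss -> 0 for this part
        "probing_sequence": {"critical": True},
        "personal_touch":   {"critical": False},  # nice-to-have -> bonus point
    }
    score = 0
    for name, meta in items.items():
        hit = observed.get(name, False)
        if meta["critical"]:
            score += 1 if hit else 0   # missing a vital part scores 0 there
        elif hit:
            score += 1                 # bonus on second-pass calibration
    return score

def sales_slab(hit_rate):
    """Map an agent's hit rate onto slabs with assigned points."""
    slabs = [(0.15, 3), (0.10, 2), (0.05, 1)]  # (threshold, points)
    for threshold, points in slabs:
        if hit_rate >= threshold:
            return points
    return 0

print(score_call({"opening_script": True, "probing_sequence": True}))  # -> 2
print(sales_slab(0.12))  # -> 2
```

    The key design point is that critical and nice-to-have items are scored by different rules, which is exactly the distinction the calibration has to make explicit up front.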
    If you need a detailed analysis or project, feel free to contact me at [email protected].
    regards,
    devashish chakrabarti 

    0
    #83481

    Tierradentro
    Participant

    I would start by looking at your internal Quality Monitoring program. Most call centers (inbound/outbound) have some type of QM program with criteria outlined for what makes a “good” call. If you don’t have one in place, I would highly recommend establishing one. Some metrics or criteria I’ve used in the past in an outbound environment include:
    Introduction (representative intro to customer – name, company, reason for calling, etc.)
    Rapport (representative’s ability to “build” rapport with the customer)
    Pitch (how well the representative “pitches” the product or service)
    Product knowledge
    Response to customer questions (how well the rep handles this)
    Handling customer objections
    Close rate (% of total calls that result in a sale)
    Call closure (representative wrap-up)
    Data entry (post call wrap-up using system to input information – sales orders, etc.)
    If you have any questions, feel free to email [email protected]
     

    0
    #83484

    Juchniewicz
    Member

    There are 3 general areas we assess in our call centers: (1)System effectiveness– i.e. Contact-to-Attempt ratio (2) Quality of Customer Interaction and call documentation and (3) Financial Results.
    Of the 3, #2 is the trickiest. When assessing the quality of the customer interaction and call documentation, the existing “checklist” of characteristics of a good call is the right place to start. A big caution: with a list of 5 characteristics, passing the attribute GR&R is tough. It will probably take good operational definitions, developed with the supervisors (or whoever is assessing call quality now), to get a good gage. When you ask “What makes a good introduction?” and so on, you will probably end up with a much more detailed list of what to listen/look for in a call and a much better chance of passing your GR&R.
     
     

    0
    #83495

    BFR
    Participant

    Thanks for the input, John. We do have quality monitoring in place today, but it is geared more toward inbound agents as “problem solvers.” Outbound monitoring is a new path for us. Good advice.

    0
    #83694

    nancy
    Participant

    If you have an example of how you’ve used an attribute Gage R&R for call monitoring, please share!  I’m attempting to work this into our process and it would help me greatly if I could see a successful model.

    0
    #85568

    Mike Smith
    Participant

    We have a call monitoring process for our inbound customer service call center.
    We have 26 items broken into 3 sections, Call management, Communication and Complete & Accurate information.
    We did a gauge R&R with 10 appraisers on 5 calls, twice.
    The GR&R result was 15% agreement within appraisers and 15% for all appraisers vs. standard. Some questions were a bit better than others, and some appraisers were a bit better than others; questions rated ok/not ok were no better than those rated ok/in between/not ok. The upshot is that the process does not work at all. If you want a copy of the mpj, send me a request at [email protected].
    Does anyone have a quality call monitoring process that has passed a GR&R? If so, I would be interested. I would also like to know how you got all the appraisers to hear the calls the same way.
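    For anyone unfamiliar with the two figures quoted above, here is a minimal sketch of how they are computed from raw ratings. The data below is invented, not from the study described.

```python
# Sketch: "within appraiser" and "appraiser vs. standard" agreement,
# the two percentages an attribute gauge R&R reports. Data is invented.

# ratings[appraiser] = one (trial1, trial2) tuple per call
ratings = {
    "A": [("ok", "ok"), ("not ok", "ok"), ("ok", "ok")],
    "B": [("ok", "ok"), ("not ok", "not ok"), ("not ok", "ok")],
}
standard = ["ok", "not ok", "ok"]  # known correct rating for each call

def within_agreement(pairs):
    """Percent of calls an appraiser rated the same on both trials."""
    return 100 * sum(t1 == t2 for t1, t2 in pairs) / len(pairs)

def vs_standard(pairs, std):
    """Percent of calls where both trials match the known standard."""
    return 100 * sum(t1 == s and t2 == s
                     for (t1, t2), s in zip(pairs, std)) / len(pairs)

for name, pairs in ratings.items():
    print(name, within_agreement(pairs), vs_standard(pairs, standard))
```

    An appraiser can be perfectly consistent with themselves yet consistently wrong against the standard, which is why both numbers matter.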

    0
    #85651

    simar
    Member

    Call quality is always a very difficult metric to pass the test of the gage. What we devised was a process where 3 appraisers rated the recorded call on the CQ sheet (the rating sheet). They sit down again after 2 days to rate the call, and we check which sections, which operators, and which questions are causing the problems.
    Once these are identified, do shared listening: we get all the appraisers together in a room, play the call (a 3rd time), ask them to rate it as a team, give reasons for the scores given, and come to a conclusion.
    After this, since the operational definitions are now clear in the appraisers’ minds, the gage normally passes.
    The caveat here is that this calibration exercise needs to happen very often. We do it once a month, to ensure the operational definitions are not drifting. Over a period of time we have also developed very robust operational definitions for the sections and questions.
    Simar

    0
    #85657

    Rishid
    Member

    Hi Mike,
    What I take from your post is that you are trying to do a calibration exercise. A Gage R&R checks the measuring tool itself. If it is calibration you are trying to do, then yes, we have calibration exercises working at our call centre.
    About your query on how to get all the appraisers to hear the call the same way: I think you can record the call and play it in front of all the appraisers together. If I am misunderstanding you, please correct me.
    rishi
     

    0
    #85661

    Tierradentro
    Participant

    Quality monitoring is a difficult process, because there is a high degree of subjectivity involved. Appraisers will never hear the same call in exactly the same way, but there are a few steps you can take to bring some objectivity back into the process. First, I would recommend using ok/not ok, and putting some definition behind each of these for all 3 categories and for each question in each category. That way, if appraisers are not sure whether to choose ok or not ok, they can refer to a criteria list for each question. If 3 out of 5 or 4 out of 5 criteria are checked, then the question is rated ok.
    Second, you can employ a customer satisfaction measurement system that uses automated surveys at the conclusion of each call (or every 3rd call) to measure the customer’s perception of the call vs. how the appraisers scored each agent. I found this to work very well, and we continue to use it today to calibrate our quality monitoring process.
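    The criteria-checklist rule in the first paragraph could be sketched as below; the criteria themselves and the 3-of-5 threshold are illustrative assumptions taken from the example above, not a fixed standard.

```python
# Sketch of the "check criteria, then rate ok/not ok" rule described above.
# The criteria list and the 3-of-5 pass threshold are illustrative.

CRITERIA = [
    "stated name",
    "stated company",
    "stated reason for calling",
    "confirmed customer identity",
    "asked permission to proceed",
]

def rate_question(checked, threshold=3):
    """Rate a question 'ok' when enough of its criteria were observed."""
    met = sum(1 for c in CRITERIA if c in checked)
    return "ok" if met >= threshold else "not ok"

print(rate_question({"stated name", "stated company",
                     "stated reason for calling"}))  # -> ok
print(rate_question({"stated name"}))                # -> not ok
```

    Turning each ok/not ok judgment into a count of concrete, observable criteria is what moves the appraisal from opinion toward an operational definition.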

    0
    #85714

    Sumit Taneja
    Member

    The Gage R&R study which you did showed 15% variation within appraisers and 15% for all appraisers vs. standard.
    In any process, variation is bound to be there and it is the law of nature.  In a call center environment, a lot of parameters are subjective which leads to variation in between the appraisers which is okay.  Variation is only bad if it is significantly high.
    In your case, the variation is 15% which is acceptable as per the norms in call center industry.  Your Gage R&R has not failed but passed.

    0
    #85759

    Mike Smith
    Participant

    Sumit
    I believe the gauge R&R shows it failed. The 15% for each means 15% agreement, not 15% variation. And the overall kappa statistic is 0.47.
    Kappa is the ratio of the proportion of times the appraisers did agree to the proportion of times the appraisers could have agreed.
    Kappa statistics (from Minitab): if you have a known standard for each rating, you can assess the correctness of all appraisers’ ratings compared to the known standard. If kappa = 1, there is perfect agreement. If kappa = 0, agreement is no better than would be expected by chance. The higher the value of kappa, the stronger the agreement. Negative values occur when agreement is weaker than expected by chance, but this rarely happens. Depending on the application, kappa less than 0.7 indicates that your measurement system needs improvement; kappa values greater than 0.9 are considered excellent.
    Mike
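    For concreteness, here is a minimal sketch of Cohen’s kappa against a known standard, in the spirit of the Minitab description quoted above (Minitab’s attribute agreement output also pools across trials; the ratings below are invented).

```python
# Sketch of Cohen's kappa vs. a known standard:
# kappa = (p_observed - p_chance) / (1 - p_chance),
# where p_chance comes from the marginal category proportions.
from collections import Counter

def kappa_vs_standard(rated, std):
    """Cohen's kappa between one appraiser's ratings and the standard."""
    n = len(rated)
    p_obs = sum(r == s for r, s in zip(rated, std)) / n
    rc, sc = Counter(rated), Counter(std)
    p_chance = sum((rc[c] / n) * (sc[c] / n) for c in set(rc) | set(sc))
    return (p_obs - p_chance) / (1 - p_chance)

appraiser = ["ok", "ok", "not ok", "ok", "not ok", "ok"]
standard  = ["ok", "not ok", "not ok", "ok", "ok", "ok"]
print(round(kappa_vs_standard(appraiser, standard), 2))  # -> 0.25
```

    Note how 4-of-6 raw agreement (67%) shrinks to kappa = 0.25 once chance agreement is removed, which is why raw percent agreement alone can flatter a weak measurement system.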

    0
    #85762

    Khare
    Member

    I would appreciate it if you could send me the mpj file.

    0
    #85763

    Sumit Taneja
    Member

    I have asked for the mpj file; I will get back to you once I see the results for myself.
     
    Sumit

    0
    #85764

    Mike Smith
    Participant

    Sumit
     
    For that I need your email address.

    0
    #85765

    Sumit Taneja
    Member

    Hi Mike,
    My email address is [email protected].
    Sumit
     

    0
    #85766

    Sumit Taneja
    Member

    Hi Mike,
     
    I see two problems here:
    One: everyone may not be on the same page with respect to understanding the parameters on which they are assessing the call.
    Two: the call monitoring sheet may be too subjective for anybody to draw inferences from.
     
    I have done Gage R&R and calibration sessions any number of times, and most of the time when one has failed, the operational definitions of the parameters were not clear to the participants; otherwise I have never come across a Gage R&R that failed.
     
    Before a Gage R&R, I always organize a training and discussion session with all the people who will be involved, so that everybody is clear on the operational definition for each and every parameter.
     
    Sumit Taneja

    0
    #85824

    Mike Smith
    Participant

    Sumit
    What you say is the right way to establish a measuring process. The reality is that it doesn’t always work that way: our process was established without looking at CTQs or developing clear operational definitions.
    That said, we were then able (a year after the monitoring was established) to run the gauge R&R, and the gauge results have led management to rethink what they want to do and how to do it, since the current process is not working.
     

    0
    #85825

    Cooke
    Member

    Hi, Mike –
    We have had some similar struggles getting our monitoring program off the ground. To help everyone get comfortable, the staff completing the monitors met weekly to monitor a call. The scores were shared and open discussion was held on the points of difference. Even with all this, there were differences between the meeting scores and those from monitors performed individually. We don’t have rigid scripts for our phone agents to follow; our business is one of answering customers’ questions about their mortgage. As such, there will continue to be differences, as in my opinion this will happen whenever objective metrics are imposed on subjective material.

    0
    #85891

    Mike Smith
    Participant

    Tracy
    We are looking at the possibility of using “secret shoppers.” This way, the incoming calls have specific correct answers, and then we know whether the CSR’s answers are complete and accurate.

    0
    #97309

    Linda Arellano
    Participant

    Mike
    I have never been a fan of secret shopping results; however, as an alternative to voice mining, I can see the viability of this approach for achieving your objectives. I would like to be considered as an independent beta resource to perform and archive these secret shopper calls. I am interested in hearing more about the specific questions and answers you are looking to capture.

    0
    #97342

    Wayne
    Member

    The metrics to measure the effectiveness of call center agents in outbound sales would be: appropriate greeting, smile in voice (pleasantness), enthusiasm, voice intonation, audibility, matching the caller’s pace, mentioning the USP, giving correct information, pushing for the sale (assertiveness), and an appropriate call ending.
    Wayne.

    0
    #111266

    dilip
    Participant

    Hi Mike,
    The information you have provided has been of relevance to me, since I am just now putting a proper procedure in place for monitoring my inbound customer care executives. I would be very grateful if you could send me a format for monitoring the executives: the parameters and the measurement types. If you have one, please forward it to my email rdilipk1rediffmail.com.
    Dilip

    0
    #114011

    Picklyk
    Participant

    Can someone shed some light on statistically valid sampling of call center agents? Assuming 5,000 agents handling 50 calls per week each, how many audits should be performed to accurately measure center-level and agent-level performance?

    0
    #114301

    Mohit Jain
    Participant

    Hi Dev,
    I was going through posts related to Six Sigma implementation in an outbound telemarketing call center.
    Your theory sounded excellent. Would you be kind enough to provide us some detailed analysis regarding the same?
    Also, what should be done if we try to implement Six Sigma on the basis of matured sales in an outbound call center?
    Regards
    Mohit Jain 
    Email address: [email protected]
     

    0
    #114304

    Amar
    Participant

    I think Dev’s inputs were worthwhile. The most important factor that people miss is creating a system / workflow for identifying and tracking hot leads (high probability of conversion). In addition, a process that profiles the customer before the call, helping the agent prepare a customized pitch for each profile, also helps. If you need more information on these, please reach me at [email protected].

    0
    #163230

    Benel
    Participant

    All the inputs here are very good; in fact, I have noted some practices I can replicate in our center.
    Just to add my two cents: compliance with regulatory requirements needs to be monitored during the call as well, such as the number of rebuttals an agent is allowed to make. Another relevant regulatory requirement for our center is anti-slamming, which is monitored by our quality specialists.

    0
    #163231

    Brandon
    Participant

    For those of you who work in “sales” call centers – what is your percentage hit rate on calls?
    I’ve registered on the do not call list and still get 20 to 30 calls per week.
    I NEVER have and I NEVER will buy anything as the result of a telesales call.  Can’t a number be flagged as a “no buy” so you eliminate the muda of calling me & others like me?

    0
    #163234

    Representing all call centers
    Participant

    Consider it done.

    0
    #163236

    Brandon
    Participant

    Consider – r  i  g  h  t  !!

    0

The forum ‘General’ is closed to new topics and replies.