iSixSigma

Jim Braun

Forum Replies Created

Viewing 4 posts - 1 through 4 (of 4 total)
  • #120902

    Jim Braun
    Participant

    This has been a very interesting thread, so I have to add my two cents…
    I’ve been a passionate follower of a great many quality philosophies and methodologies for the last 17 years, and though I too have my MBA with a specialty in Quality Management, I continue to find out how little I really know as I try to apply the overall philosophy and methods in everyday business practices.
    I believe each of the quality philosophies, methodologies, standards, awards, tools, etc. was designed to strengthen a weakness of its time, whether within our national management culture (US, Japan, etc.) or within a particular industry.  They each have their strengths and weaknesses, and when viewed as a “set,” you can see potential guidelines for a variety of settings.
    As one example, I’ve been working with customer service centers for the last 7 years and have found that a combination of methodologies works best in that industry.  We use the ISO 9000 standards for documenting our processes, COPC 2000 (a call center industry standard based on the MBNQA) for its best practices and benchmark data for call centers, and Six Sigma for our process improvement methodology.
    In my humble opinion, none of them is a “best practice” as a sole methodology, but combined they serve as a reference library of best practices.  If I have to use an analogy, I would look at it like a football team’s strategy for winning a game.  You’re going to find yourself in a variety of situations on the field, and the only limits to your potential successes are how well you practice and work as a team, how knowledgeable and motivational your coach is, and, of course, how good your game plan is.
     
    – Jim Braun
     

    #95736

    Jim Braun
    Participant

    Original MBB,
    Six Sigma concepts are not new, though there has been some reorganization of long-proven and improved quality theories/concepts.  ASQ has been a body of professionals (like many of us responding to your posting) attempting to promote old and new quality concepts for decades.  ASQ has also been recognized as the university accreditation body for subjects in quality (I believe through the PhD level).
    In my opinion, certification is necessary.  It rewards us for achieving a level of knowledge and success, defined by the criteria to become certified.  It’s a starting point, just like a school diploma.  I hope you give some of these certified individuals an opportunity to further develop under your tutelage.  You sound very confident in your knowledge, skills and past successes.  However, there have been many before you practicing in the quality profession; many have done better than you, many probably worse.
    As an MBB, I’m surprised you only criticized the cert process but did not provide the appropriate people with your recommendations for improvement (sorry, my Quality 101 is coming out).
    Do you have specific recommendations, or …?
     

    #95680

    Jim Braun
    Participant

    Fin,
    1. How did you get the customer satisfaction rating for each operator? Is it a case of random surveys of callers?
    Answer: Our clients used a couple of different processes.  Two of our clients used web-based survey processes that asked the customer to take a survey after support was provided.  However, since the client had outsourced this to a third party, the raw data was never provided to us (we were told it was a cost issue).  Since the third-party summaries did not provide the kind of analysis we needed, the survey results were only a reminder of how we were doing.  They didn’t give us any information we could analyze to find opportunities for improvement (very much a waste!).
    However, one of our clients did it right: that client had a marketing firm perform daily calls to survey random customers within 48 hours of getting support from our site.  They surveyed about 2 percent of the total volume of calls.  Not every agent was surveyed equally; by randomly selecting calls, the agents with the most calls would naturally have a higher chance of having one of their calls surveyed.  Sometimes this led to one agent having been surveyed only two times in a month and another agent being surveyed five times; however, over a year’s period, they were fairly equal.  The survey results were then screened by the client to filter out incomplete surveys and ensure the agent surveyed, the call queue, etc., was properly identified (sometimes our agents would take calls in a couple of different queues, so they needed to make sure the survey was properly coded to the right queue).  The survey results were updated on a shared FTP site for us to download (an Excel file) once weekly.  The results in the Excel file were then loaded into our own SQL database.  We used an Access front end and Excel to extract the data, trend overall satisfaction and dissatisfaction results, and break down results into the survey categories (process, product, service, etc.).  A website was created for the agents to view their customers’ ratings and comments.  The website had a few functions for them, such as trending results over 30-day, 90-day, 6-month and 1-year time periods.  It also showed them where the team’s average was so they could compare their performance to their peers’ average.  (At that time, we called the webpage “MyCustomerSat.”)
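    For anyone wanting to reproduce the trending piece, here is a minimal sketch of the logic behind that webpage (written in Python with pandas rather than the Access/Excel setup we actually used; the column names are only placeholders):

    import pandas as pd

    def sat_trends(surveys: pd.DataFrame, today: pd.Timestamp) -> pd.DataFrame:
        """surveys needs columns: agent_id, survey_date (datetime), sat_score (1 = satisfied, 0 = not)."""
        windows = {"30d": 30, "90d": 90, "6mo": 182, "1yr": 365}
        per_window = []
        for label, days in windows.items():
            recent = surveys[surveys["survey_date"] >= today - pd.Timedelta(days=days)]
            per_window.append(recent.groupby("agent_id")["sat_score"].mean().rename(label))
        trend = pd.concat(per_window, axis=1)
        trend.loc["TEAM AVERAGE"] = trend.mean()  # so each agent can compare to the peer average
        return trend

    # Example usage:
    # surveys = pd.read_excel("weekly_survey_extract.xlsx", parse_dates=["survey_date"])
    # print(sat_trends(surveys, pd.Timestamp.today()))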
    One note: oftentimes, agents feel like a survey didn’t really apply to them because the customer was upset over the product, not the service they provided, or because the customer called in twice, the first agent really upset the customer, and the last agent (who was surveyed) provided excellent service.  We countered these arguments by not attaching any threats of employment to customer satisfaction results, and all of management spoke in a single voice: “The customer doesn’t keep track of who did what and when.  When they have a bad experience, it is with the company (not a specific site, or with sales, or with ‘x’).  If you get upset over a poor survey, get upset that the whole experience (product, service, process, etc.) is still in need of improvement, and see if there is anything within the survey’s comments that can be used for us to improve.  Pass it on, make a suggestion, but don’t feel like it’s a personal critique (unless you know it is targeted at your service).”
    2. Did you introduce any Lean concepts?
    Yes: standardization, although it had both successes and failures.  I can’t say the failures were due to not having the CEO’s full backing; the CEO was what we called a “quality geek” and drove quality every time she spoke.  In hindsight, I can only guess at what we did incorrectly, and I may still not have the full insight.
    (1) We needed to provide JIT training to a senior leadership group who would design the standardization process the company would use (people support what they help to create).  Instead, it was left up to a handful of us quality guys, and though we attempted to train people on what we came up with, it was not well supported because, I believe, most felt it was being “thrust upon them,” since they did not help to create it.
    (2) We needed to develop formal JIT training for the standardization process, then have the training group work through a live process using what was taught, spread over sessions of a few hours each across a few weeks (everyone’s got a “day job”).  Instead, we tried cramming a lot of theoretical training into them over a couple of hours, did an example in the classroom, and then expected everyone to know, understand and use it going forward.
    (3) We needed to do a better job of implementing the standardized process (better training for people, better follow-up [process audits] and better document control processes to keep up with changes).  Though we had a very good document control process, most people were so busy with daily operations that they didn’t even know a document control process existed (again, better training and implementation were needed).  We also had a very good and thorough change management process; however, key persons were not involved in creating it, and that lack of ownership prevented them from embracing it.
    3. How did you decide what was an acceptable AHT?
    Historical AHTs were initially used to set up the overall goals for staffing and the performance needed for the staffing/scheduling plans to work (and meet net profit goals).  Daily, weekly, monthly and seasonal loads were analyzed for proper distribution adjustments and staffing levels to meet the performance requirements (AHT, service levels, etc.).  Once you have the initial staffing levels to meet the service level requirements, you can put the system I described earlier in place.  The focus will always be on the lower performers: investigating how to improve their performance and driving them to better customer satisfaction and lower handle times.  The focus must be on investigating how you can help them become better at the process or operation they perform (better at queries, better at call control, better at …).  Customer satisfaction will go up and AHT will go down.
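    For illustration only (this isn’t necessarily the exact model we or any particular center used), the Erlang C formula is a common way to turn historical AHT and forecast call volume into the staffing needed to hit a service level target.  A quick Python sketch with made-up example numbers:

    import math

    def erlang_c(traffic: float, agents: int) -> float:
        """Probability a call has to wait, given offered traffic (in Erlangs) and agent count."""
        if agents <= traffic:
            return 1.0
        erlang_b = 1.0
        for n in range(1, agents + 1):
            erlang_b = (traffic * erlang_b) / (n + traffic * erlang_b)
        return erlang_b / (1 - (traffic / agents) * (1 - erlang_b))

    def agents_needed(calls_per_hour: float, aht_sec: float,
                      target_sl: float = 0.80, answer_within_sec: float = 20) -> int:
        traffic = calls_per_hour * aht_sec / 3600.0  # offered load in Erlangs
        agents = max(1, math.ceil(traffic))
        while True:
            p_wait = erlang_c(traffic, agents)
            service_level = 1 - p_wait * math.exp(-(agents - traffic) * answer_within_sec / aht_sec)
            if service_level >= target_sl:
                return agents
            agents += 1

    # e.g., 300 calls/hour at a 420-second AHT, targeting 80% of calls answered within 20 seconds:
    # print(agents_needed(300, 420))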
    Initially we heard a lot of agents telling us that they cannot go that fast and still have high customer satisfaction.  There were two versions of this argument: (1) “When I give my undivided attention to a customer and take my time with them, they are more satisfied,” and (2) “I am meeting my AHT goal of ‘x’ minutes or seconds.  I don’t need to go any faster.”
    Our answer was simple, and in two parts: (1) It can be done faster while still having high customer satisfaction, because the team’s average is faster and most of your team is doing it: “I’m not asking you to perform any differently than the rest of the team, so your first goal would be to reach the team’s average.”  (2) AHT goals are based on historical performance.  We developed cost and revenue models whose root premise is AHT and volume.  Company profits (if you are an outsourcer) are increased if we can reduce costs.  Reducing AHT (along with reduced availability) while meeting the customer’s needs on every transaction is the most direct contribution you can make to improving profits.  If you meet the customer’s needs on every transaction, volumes will be reduced too, saving costs for the client.  The gains you receive in your reputation for achieving cost reductions in support calls/emails will outweigh the revenue lost to reduced volumes.  Your client will be happy to market your value and not only give you additional business but help you gain new clients.  (If you are an inside team, reducing AHT is also a way of reducing costs and will keep costs from eating away at revenues already gained.  However, here’s another selling point: if they can meet the customer’s needs on every transaction, the volume of calls/emails will be reduced, customers remain loyal, and that will also reduce costs.  Customer loyalty will increase revenues.)
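    To make the cost side of that argument concrete, here is a back-of-the-envelope sketch (purely illustrative numbers, not our actual cost model) of how AHT and volume drive the handle-time cost a center has to staff for:

    def monthly_handle_cost(calls_per_month: int, aht_sec: float,
                            loaded_cost_per_agent_hour: float = 30.0,
                            occupancy: float = 0.85) -> float:
        """Agent cost scales with the total handle hours the center must staff for."""
        agent_hours = calls_per_month * aht_sec / 3600.0 / occupancy
        return agent_hours * loaded_cost_per_agent_hour

    baseline = monthly_handle_cost(calls_per_month=60_000, aht_sec=420)
    improved = monthly_handle_cost(calls_per_month=57_000, aht_sec=390)  # fewer repeat calls, lower AHT
    print(f"Monthly savings: ${baseline - improved:,.0f}")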
    The above description of agent improvements reducing costs and increasing revenues is why I’m an advocate of agent incentive plans.  It doesn’t cost a lot to provide additional incentive bonuses to those achieving the highest performance, to those with the greatest improvement, and to the teams with the best overall performance.  We did this with a “success shares” program that was a key driving force in motivating people to perform their best.  (By the way, it was developed mostly by the agents and supervisors themselves!)

    #95622

    Jim Braun
    Participant

    Vinay,
                    
    It seems you’re asking some very general questions requiring very complex and specific answers.  If you can give some further information to clarify some of your generalities, I may be able to assist.
     
    (1)   Please define the key customer process you are referring to.
    (2)   Please define the current metric you are tracking.
    (3)   Please define a defect for this process.
     
    In the meantime, allow me to offer some insight from my experience with managing quality and customer satisfaction in call centers.
     
    I have found there are two types of defects in servicing customers: (1) defectives, defined as anything that occurs that causes the end-user (consumer) to call/email again or to discontinue doing business with the company (thus the whole call is considered defective), and (2) defects, which are annoyances, such as misspellings (the call/email has a few dings and scratches, but met the serviceable attributes of the transaction).  However, multiple “defects” can turn into a “defective.”
     
    Defectives are inaccurate answers given to the customer, an extreme soft-skill breach (cussing at the customer, hanging up on the customer, etc.), lying to a customer, doing something illegal, and/or not meeting the end-user’s needs as communicated by them.  (You need to define for your customer, “What would cause them to call/email back because they didn’t get what they needed, or would cause them to stop doing business with you?”  However, I have found this is often set at the beginning of each transaction by the customer.  You can define the 80 percent who call/email, but the other 20 percent are too specific to the individual customer.  Therefore, we defined the “customer specification” as set by the customer at the beginning of the call/email [i.e., “I need to change my order from …”], and that set the “defective” specification for the call/email.)
     
    Defects are usually all the other attributes we normally evaluate.  These are found within categories such as professionalism, listening, communicating, opening, closing, etc.
     
    In a service industry, most attributes are not absolute, except whether data was correctly input into a system or whether words are misspelled.  However, most attributes that can be evaluated with an absolute “go/no-go” are not going to cause a customer to stop doing business with a company.  (The exception is if you set a customer’s expectation and do not deliver on it [e.g., “I will call you back tomorrow at 2pm to see if you received your CD,” or improperly inputting an order so the customer does not receive it].)
     
    In my experience, customer satisfaction in a call center environment is too varied and too individual to each transaction to track anything with the precision intended by Six Sigma science.  However, I have found ways to achieve success using Six Sigma at the theoretical level.
     
    Please bear with me a little longer.  We first had to reduce variation.  For us, variation was defined in two different ways: (1) an agent’s average handle time (AHT), and (2) how each agent performed the process as defined.  The first (AHT) is fairly easy: collect the time each agent is on the phone (talk time + hold time + wrap time) for every call, then average it per agent per week.  The second (performance of the process) required that we map the process and fully understand the policies and procedures that control it.  We then had to do several assessments (sitting with several random agents) to see if the process was actually followed as designed.
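    A minimal sketch of that weekly AHT roll-up (Python with pandas; the column names are only assumptions about how the call detail records might be laid out):

    import pandas as pd

    def weekly_aht(calls: pd.DataFrame) -> pd.DataFrame:
        """calls needs columns: agent_id, call_start (datetime), talk_sec, hold_sec, wrap_sec."""
        calls = calls.copy()
        calls["handle_sec"] = calls["talk_sec"] + calls["hold_sec"] + calls["wrap_sec"]
        calls["week"] = calls["call_start"].dt.to_period("W")
        return (calls.groupby(["agent_id", "week"])["handle_sec"]
                     .mean()
                     .rename("aht_sec")
                     .reset_index())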
     
    (Side note: As time went on, we used our quality team to monitor agents’ process performance to distinguish between poor process performance caused by (1) training deficiencies and (2) agent discipline issues.  When remote monitoring confirmed poor process performance, we would follow up with a side-by-side.  If the agent, on their best “behavior,” still performed poorly, we could categorize the problem as a training or individual capability issue and offer assistance.  If they could perform the process when “supervised” (side-by-side monitored) but didn’t when remotely monitored, we assumed it was an individual discipline issue and focused coaching on motivation and possibly job enrichment techniques.)
     
    We next took the weekly average handle times for each agent and put them in a data set for the team (keeping the names attached to the average handle times).  When we charted it using a control chart showing +/- 3 standard deviations, we noticed that about 30-50 percent of the agents consistently performed worse than the team average, i.e., their handle times sat above it week after week (you can use a simple run chart, too).  These 30-50 percent of the agents were the reason the “team’s” average handle time was as poor as it was.  (Normally, we would expect each agent’s times to move randomly above and below the average; however, when grouped with their team, the consistently poor outliers could be identified as specific agents.)
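    A rough sketch of that screening step (same assumed columns as the roll-up above; the 80 percent threshold is just an example):

    import pandas as pd

    def flag_consistent_outliers(weekly: pd.DataFrame, min_fraction: float = 0.8) -> pd.Series:
        """weekly needs columns: agent_id, week, aht_sec (one row per agent per week)."""
        team_mean = weekly["aht_sec"].mean()
        team_std = weekly["aht_sec"].std()
        ucl, lcl = team_mean + 3 * team_std, team_mean - 3 * team_std  # control limits
        print(f"Team mean {team_mean:.0f}s, control limits [{lcl:.0f}s, {ucl:.0f}s]")
        slow_fraction = (weekly.assign(slow=weekly["aht_sec"] > team_mean)
                               .groupby("agent_id")["slow"].mean())
        return slow_fraction[slow_fraction >= min_fraction]  # agents slower than average in most weeks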
     
    Next, we targeted investigations into each of these agents’ performance.  We wanted to find out: were their long handle times necessary to provide accurate resolutions, or were others who were faster also providing accurate resolutions, meaning the poor handle time performers needed to improve?  Once we compared the customer satisfaction results of this team, we found that average handle time did have a slight correlation with the customer satisfaction results.  What we found was against what we believed: agents who were faster actually had better customer satisfaction scores!  Thus, another insight: the customer not only expects accurate resolutions, but wants them as quickly as possible.
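    The comparison itself is simple to sketch (again with assumed column names); the negative coefficient we saw means lower handle times went with higher satisfaction:

    import pandas as pd

    def aht_vs_csat(weekly: pd.DataFrame, surveys: pd.DataFrame) -> float:
        """weekly: agent_id, aht_sec per week; surveys: agent_id, sat_score per surveyed call."""
        per_agent = (weekly.groupby("agent_id")["aht_sec"].mean().to_frame()
                           .join(surveys.groupby("agent_id")["sat_score"].mean()))
        return per_agent["aht_sec"].corr(per_agent["sat_score"])  # Pearson correlation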
     
    Further investigations with the poor average handle time performers mostly revealed they had problems working with the software, using proper queries to find the right knowledge base article to assist the customer, shooting from the hip, etc.  Each agent’s supervisor developed a plan to work with them on (or get help for) their weaknesses.  The agent would commit to an expectation (performance level) and a date to achieve it.
     
    Note: the agent set the level and date; however, the supervisor had to ensure it was sufficient.  If the rate of improvement was unacceptable, the supervisor would ask, “What barriers do you see that will prevent you from reaching ‘x’ by ‘date’?”  The supervisor would need to commit to removing these barriers (if realistic) in order to get commitment from the agent to achieve the rate of improvement needed.  One last note on this: it needs to be understood by the agent and supervisor that improvement “must” happen.  Failure to improve at an acceptable rate could mean termination.  If communicated properly, agents can understand that they may not be fit for this work.  As long as the company and management are sincerely sensitive to making a fun and comfortable work environment, everyone will strive to perform their best.  After setting a couple of improvement milestones over a few weeks, if the agent is not “cutting it,” they also know this and are preparing themselves for ending the job (or for being moved to an easier queue that matches their capabilities).  You will find that terminating employment is rarely, if ever, needed.  When it is needed, it goes much more smoothly, and the agent leaves in a very friendly and professional manner.  Who knows, you may need their services later in a different program.
     
    After three months we began seeing an amazing phenomenon: customer satisfaction results were increasing while average handle times were decreasing (a negative correlation)!  In hindsight, we attributed it to a few factors: (1) by identifying the slower performers and giving them the additional training they needed, we better equipped them to meet customer needs; (2) the Hawthorne effect was probably influencing them, too, since we spent a lot of time with them, were genuinely concerned with their performance, and were genuinely taking actions to help them (NOT punish or threaten them); and (3) because these metrics (customer satisfaction and average handle time) were being reviewed frequently between the agents and their supervisors, there was no doubt what was most important (inspect what you expect!).  (Note: it was not how fast they performed a call; it was how often they could get high marks from their customers, coupled with how quickly and consistently they could achieve this combination.)
     
    Final thought: Six Sigma calculations and goals must have hard definitions and measurements going in to have confidence in the results coming out.  If your customer cannot define a defect but “knows one when it happens,” it is a soft measure, and you are usually better off going directly to customer satisfaction metrics and looking for broadly defined areas for improvement (e.g., costs too much, takes too long to fix, wait too long, had to call back 2-3 times to get it fixed).  If you can find solid definitions and metrics, they are probably “dissatisfiers” such as “didn’t get my order on time”; otherwise, you will probably be the first to find a new hard call center metric (beyond AHT, accuracy, efficiency metrics, service level, backlogs, on-time and volume).  However, the time you spend doing this is time and effort better spent on the simpler approach I outlined earlier, which will let you make improvements faster and at a level well beyond your wildest expectations.
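    If you do manage to pin down a hard defect definition (for example, the “defective” definition earlier: any transaction that forces a call-back or loses the customer), the standard arithmetic is defects per million opportunities and the corresponding sigma level.  A quick sketch with made-up counts:

    from statistics import NormalDist

    def dpmo(defectives: int, units: int, opportunities_per_unit: int = 1) -> float:
        return defectives / (units * opportunities_per_unit) * 1_000_000

    def sigma_level(dpmo_value: float) -> float:
        # Conventional long-term sigma, including the customary 1.5-sigma shift.
        return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

    print(dpmo(120, 10_000))              # 12,000 DPMO if 120 of 10,000 calls were defective
    print(round(sigma_level(12_000), 2))  # about 3.76 sigma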
     
    Call center performance is built on very complex processes, complex measures and complex people.  We tend to keep trying to find more and more metrics to improve, and in doing so we fail the “people” who need one-on-one personal leadership, assistance or job enrichment more than they need another metric to be measured against.  Don’t confuse these issues!
     
    Good luck!
