Survey Sample Per Employee – How to Determine Real Outliers?
July 19, 2021 at 5:44 pm #254784
diogogbras (Participant)
I’m working in the contact center industry. It’s quite common to send out a survey to the end-users at the end of each transaction.
The survey questions vary, but the outputs are binary or ordinal. We convert the survey responses into continuous data (e.g., Likert scale 1 to 5: we count the 4/5 replies and divide by all surveys received to calculate the Customer SAT %).
In many cases, the survey sample vs. the population of closed transactions is quite representative, offering an MoE of 2–3% at a 95% confidence level.
Each employee receives a different volume of surveys, and here is where the plot thickens. Even if we extend the survey collection window to a full month, we will have, let's say, an average of 100 surveys received per employee; however, the standard deviations are super high.
A big part of the contact center work, as a customer requirement, is to ensure we launch outlier management initiatives based on the calculated survey score in %. However, considering the very high variation in survey volumes, we cannot say with certainty that we in fact have an outlier, simply because we compare results of employees with 100 surveys against employees that have 3.
The current approach that I'm taking is to calculate the individual employee margin of error (sample of surveys vs. closed cases for the period in the analysis), and if I purge the low margins of error I'll get the population cut down by 30–50%.
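For context, the per-employee margin of error calculation looks roughly like this (a minimal sketch in Python; the 95% z-value, the conservative p = 0.5 assumption, and the finite population correction are my assumptions about the standard proportion formula, not something prescribed by the survey tool):

```python
import math

def margin_of_error(n_surveys: int, n_closed: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion, with finite population correction.

    n_surveys: completed surveys for the employee (sample size)
    n_closed:  closed transactions for the employee (population size)
    p:         assumed proportion (0.5 is the most conservative choice)
    """
    se = math.sqrt(p * (1 - p) / n_surveys)
    # Finite population correction: shrinks the MoE when the sample
    # covers a large share of the population of closed cases.
    fpc = math.sqrt((n_closed - n_surveys) / (n_closed - 1)) if n_closed > 1 else 1.0
    return z * se * fpc

# An employee with 100 surveys out of 400 closed cases vs. one with 3 out of 100:
print(round(margin_of_error(100, 400), 3))  # tight interval
print(round(margin_of_error(3, 100), 3))    # interval so wide the score is near meaningless
```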
Is there any way of getting this done with another method?

July 25, 2021 at 12:47 pm #254841
Robert Butler (Participant)
Is there any way of getting this done? – Yes, start over because, based on your description of what you are doing, you are a very long way from being able to make any kind of decisions concerning employee performance.
Let’s do a recap of your initial post.
1. You said, “We convert the survey responses into continuous data (e.g., Likert scale 1 to 5: we count the 4/5 replies and divide by all surveys received to calculate the Customer SAT %).” I think what you meant to say is that you have Likert scale data with a range of 1–5 (presumably 1 = very dissatisfied, 2 = dissatisfied, 3 = neither satisfied nor dissatisfied, 4 = satisfied, and 5 = very satisfied), and then you arbitrarily lump the 1/2/3 ratings into one group and the 4/5 ratings into another, take the count of the 4 and 5 entries, divide this number by the total number of surveys received, and call that an estimate of customer satisfaction.
a. You are deliberately throwing away information – you have a 1-5 scale – use it. Lumping the data in this arbitrary manner does not make sense – for example, you are equating extremely dissatisfied with neither satisfied nor dissatisfied.
b. The whole point of having a 1-5 Likert scale and examining the counts in each category is so you can detect changes or lack of changes in trending either as ratings improve, decline, or remain constant. Converting to binary removes this capability.
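The information loss from lumping is easy to demonstrate. Here is a short sketch with hypothetical counts: two very different rating distributions that produce an identical top-2-box "CSAT %", while the full 1–5 scale tells them apart:

```python
# Counts of responses for ratings 1..5 (hypothetical data).
month_a = {1: 0, 2: 0, 3: 40, 4: 30, 5: 30}   # nobody dissatisfied
month_b = {1: 30, 2: 10, 3: 0, 4: 30, 5: 30}  # 40% dissatisfied

def top2box(counts):
    """Share of 4/5 ratings - the binary 'CSAT %' conversion."""
    return (counts[4] + counts[5]) / sum(counts.values())

def mean_rating(counts):
    """Mean of the full 1-5 scale - keeps the ordinal information."""
    return sum(r * n for r, n in counts.items()) / sum(counts.values())

print(top2box(month_a), top2box(month_b))          # identical: 0.6 and 0.6
print(mean_rating(month_a), mean_rating(month_b))  # very different: 3.9 vs 3.2
```

The binary conversion reports both months as "60% satisfied" even though one of them has 40% of customers actively dissatisfied.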
2. You said, “Each employee receives a different volume of surveys, and here is where the plot thickens. Even if we extend the survey collection window to a full month, we will have, let's say, an average of 100 surveys received per employee; however, the standard deviations are super high.” And later you said, “…simply because we compare results of employees with 100 surveys against employees that have 3.”
a. In other words, you have a really bad case of sampling bias. With those kinds of differences in completed responses, the “super high” standard deviations are exactly what you should expect.
b. The first question you should ask and resolve is why the vast differences in customer response?
1. Are your employees really getting a random sample of customers? How do you know?
2. Are your employees really getting a random sample of types of customer problems? How do you know?
3. How are customer problems classified?
a. If you don’t have some method for problem classification then you need to sit down with the employees and develop a meaningful way to quantify problem type.
b. If you do have a way to classify problems then is there any correlation between problem type and survey completion?
4. Assuming you have one of those 24/7-type contact setups, what is the story with respect to daytime vs. night-time contacts, and how are you taking this into account?
a. different kind of customers day vs. night?
b. different manning levels at your place day vs. night?
c. different skill levels of day vs. night workers?
5. …and on and on.
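The core numerical problem with the 100-surveys-vs-3-surveys comparison can be sketched directly: with only a handful of responses, the confidence interval around an employee's score is so wide that it overlaps almost anyone else's, so "outlier" claims cannot be supported. This uses hypothetical employees and a simple normal-approximation interval for a proportion (my choice of method, not something from the original post):

```python
import math

def ci(successes: int, n: int, z: float = 1.96):
    """95% normal-approximation interval for a satisfaction proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical employees: one with 3 surveys, one with 100.
low_a, high_a = ci(1, 3)     # 33% CSAT from 3 surveys
low_b, high_b = ci(80, 100)  # 80% CSAT from 100 surveys
print(f"3 surveys:   {low_a:.0%} to {high_a:.0%}")  # spans most of 0-100%
print(f"100 surveys: {low_b:.0%} to {high_b:.0%}")  # tight interval
```

Even though 33% looks like a dramatic outlier next to 80%, the two intervals overlap, so the data cannot distinguish the employees at all.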
3. You said, “The current approach that I'm taking is to calculate the individual employee margin of error (sample of surveys vs. closed cases for the period in the analysis), and if I purge the low margins of error I'll get the population cut down by 30–50%.”
a. To begin with – how can a ratio of completed surveys vs closed cases be viewed as a margin of error?
b. As I understand it, you are doing the following: let's say an employee gets 3 completed surveys but successfully closes out 100 problems – you toss this person from consideration. On the other hand, if an employee gets 50 completed surveys and successfully closes out 50 projects, you keep this person for consideration. If this is what you mean, then the question is: why are you computing ratios of completed surveys to closed-out projects at all? Assuming random problems/problem difficulty and random clients across employees, the issue is just one of successful project closing.
You didn't specifically state this, but it sounds like you look at a month's data, run some calculations, and then make a decision concerning employee performance – in other words, you are judging people on the basis of a single monthly data point.
1. You need to recognize each of your employees is a production line.
2. Since they are production lines you need to look at their trends over time, construct a control chart for each individual and use the results of the analysis of that kind of data to make decisions concerning employee performance.
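A minimal sketch of the individuals (XmR) control chart suggested above, assuming one score per month per employee (the score series here is hypothetical; 2.66 is the standard XmR constant, 3/d2 for a moving range of size 2):

```python
# Individuals (XmR) control chart limits for one employee's monthly CSAT scores.
scores = [0.78, 0.81, 0.74, 0.80, 0.77, 0.83, 0.79, 0.52]  # hypothetical monthly %

center = sum(scores) / len(scores)
moving_ranges = [abs(b - a) for a, b in zip(scores, scores[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# 2.66 = 3 / d2 for a moving range of size 2 (standard XmR constant).
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

for month, s in enumerate(scores, start=1):
    flag = " <-- investigate" if not (lcl <= s <= ucl) else ""
    print(f"month {month}: {s:.2f}{flag}")
```

A point outside the limits (here the 0.52 in month 8) signals a change worth investigating; everything inside the limits is just routine month-to-month variation, not a performance difference.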
If what I have posted is an acceptable summary of what you are doing then, as I said, you need to start over, and the first place you need to start is taking the time to really understand your process, which means, among other things, understanding why there are such huge differences in completed customer satisfaction forms.