iSixSigma

Rob Brogle

Activity

  • Thanks, Pete. This was written back in September of 2013.

  • Hey Mike,

    As you suspect, there are a number of issues with this survey approach. The fact that the company uses the same sample size for regions having differing transaction totals is okay, but there are a […]

  • Hey Mike,

    There is a lot of information out there about constructing surveys but I don’t know off the top of my head of any studies comparing surveys administered by direct points of contact versus a neutral […]

  • As you pointed out, survey data is often skewed (non-normal) and also strictly-speaking we are dealing with ordinal, discrete data and not continuous data which is required for the type of multiple regression […]

  • Actually, it can be estimated. You’ll need to take a random sampling of the non-responders and resend the survey to them, but this time use follow-ups, incentives, hassling techniques, threats, etc. to get them […]

  • When the Ritz-Carlton Hotel Company won the Malcolm Baldrige National Quality Award for the second time in 1999, companies across many industries began trying to achieve the same level of outstanding customer […]

    • Hey Mike,

      Thanks for the comment. My responses to your questions are below:

      1. If your sample size is large (>30), then you can use the formula stated in the article. If your sample size is smaller than that, you should use the t-distribution formula for calculating 95% confidence intervals of the mean. I just now tried to type it in here, but without a math font it’s pretty much unreadable. However, you can find it on Google quite easily if needed. Note that these formulas assume the data is more or less normally distributed. If, instead, your data is highly skewed (which is often the case for survey data), then it’s better to use median scores instead of means. In that case, use the confidence interval formula for medians (which you can also find using Google).

      2. Be sure that your performance targets are outside the confidence intervals of your baseline data. This is important: if your targets are set within the confidence intervals, then you can hit or miss them based purely on chance. I’ve seen many cases where the maximum value on the scale (e.g., 10 on a 1-10 scale) falls within the confidence intervals of the baseline data. This indicates that the sample size is too small to distinguish any improvement in customer satisfaction.

      3. If you have hit your target, run a 2-sample t test on the “before” and “after” data to determine whether or not the improvement is statistically significant. If you are using median scores instead of means, then run a Mood’s Median test. A p-value less than 0.05 indicates that you can be more than 95% certain that the target was reached due to a real improvement in scores and not due to a statistical fluctuation of the data.

      Hope that helps. Let me know if you have any additional questions…

      — Rob
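
      The three numbered steps above can be sketched in Python. This is a hypothetical illustration (the original post names no tools; SciPy and the made-up 1-10 scores are my assumptions):

```python
# Hypothetical sketch of the three steps above: t-based CI on the baseline
# mean, checking that the target lies outside that CI, and testing the
# before/after change. The score data here is invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.integers(5, 10, size=40).astype(float)  # baseline 1-10 scores
after = rng.integers(6, 11, size=40).astype(float)   # post-improvement scores

# Step 1: 95% confidence interval of the baseline mean (t-distribution)
n = len(before)
mean, sem = before.mean(), stats.sem(before)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci_low, ci_high = mean - t_crit * sem, mean + t_crit * sem

# Step 2: a target is only meaningful if it falls outside the baseline CI
target = 9.5
target_outside_ci = target > ci_high or target < ci_low

# Step 3: is the before/after change statistically significant?
t_stat, p_mean = stats.ttest_ind(after, before)       # 2-sample t test (means)
_, p_median, _, _ = stats.median_test(after, before)  # Mood's median test
```

      A p-value below 0.05 from either test would indicate a real shift rather than chance fluctuation, matching point 3 above.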

    • Just a quick addition about setting targets: I would highly recommend setting performance targets based on the drivers of customer satisfaction rather than the satisfaction scores themselves. Putting targets on things like quality of work, response time to customers, problem resolution time, etc., will be much more measurable and reliable and will go significantly farther in driving the kinds of behaviors that you are looking for to make your customers happier…

    • Actually, you have two issues here: (1) very small sample size and (2) very low response rate. Let’s look at the first issue:

      At a sample size of 15, your 2 out of 15 top-box responses give a sample proportion of 13% but the confidence intervals for the “true” population proportion are between 2% and 40%. So there is a huge uncertainty there due to the sample size of only 15 (although it seems that even at the high end of the uncertainty you are still well under the 89.5% goal). Of course, this all assumes that those 15 responses actually represent the population (the 300 people that you serviced). Which leads us to the second issue.

      The second issue to me is the bigger problem. At a response rate of 5%, there is a high likelihood of a non-response bias. Only 15 out of 300 people were motivated to answer the survey and so it’s unlikely that those responses are representative of the entire 300. If the motivated 15 had a higher proportion of unsatisfied folks than the “silent majority” of non-responders (often the responders have a negative bias compared to the non-responders), then you wind up getting penalized because of this bias.

      Unless response rates are high (> 80%) and statistical uncertainty is taken into account, survey results can be very misleading and can lead to bad decisions, unfair evaluations, and all kinds of other nasty things. It is much, much better to evaluate people based on the concrete things that we know drive customer satisfaction and loyalty: fast responses, short problem resolution times, high quality of service (as defined by specific actions), etc. These attributes CAN be measured accurately and improving those attributes WILL make customers happier (although this increased happiness may well be missed on a small-sample and/or low response-rate survey).

      I think sometimes it’s easy for leadership at a company to throw out surveys as a means of evaluation without really thinking about what they’re doing (and how the misleading results are hurting the employees). This is unfortunate, but also very, very common.

      Hope that helps…
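
      The roughly 2%-40% uncertainty for 2 top-box responses out of 15 can be reproduced with an exact (Clopper-Pearson) binomial interval. A hypothetical sketch (SciPy is my assumption; the original comment doesn't specify a method):

```python
# Hypothetical sketch: exact Clopper-Pearson 95% interval for the
# 2-out-of-15 top-box proportion discussed above.
from scipy.stats import beta

k, n = 2, 15      # top-box responses, sample size
alpha = 0.05

# Clopper-Pearson bounds via the beta distribution
lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0

print(f"{k}/{n} = {k/n:.0%}, 95% CI: [{lower:.0%}, {upper:.0%}]")
```

      Even the upper bound of this interval sits well below the 89.5% goal, which is the point made above: the sample proportion alone hides how wide the uncertainty really is.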

    • Wow!

      Thank You.

      Some very interesting points. I also recommend that readers have a look at the second and third chapters of Darrell Huff’s book How to Lie with Statistics.

      Thanks again!

  • Rob Brogle changed their profile picture 6 years, 10 months ago

  • Please note that the scope of the article was to address the technical body of knowledge that an MBB should possess, not the skills required for that position. If you read more carefully, you’ll see that the […]

  • Certainly it is true that many business and process problems can be well-addressed by simple approaches such as Ishikawa’s Basic Seven Tools of Quality, lean fundamentals, etc. It is important for an […]

  • Yes, an MBB should definitely have an additional body of knowledge above and beyond that of a Black Belt. We will be posting a recommended MBB BoK in an upcoming article.

  • In response to these three “red flags”:

    (1). Certainly project management and communication skills are critical for a successful Black Belt, just as they are critical for any manager, supervisor, team leader, […]

  • Rob Brogle became a registered member 7 years, 11 months ago