Can We Add Samples of Different Sample Sizes to Get a Yearly Result?

Six Sigma – iSixSigma Forums Industries Financial Services Can We Add Samples of Different Sample Sizes to Get a Yearly Result?

Viewing 4 posts - 1 through 4 (of 4 total)

    My employer does a client satisfaction survey of my clients every 4 months: the 1st with n=14, the 2nd with n=4, the 3rd with n=6. The employer calculates the % for each of the 15 questions asked. At the end of the year they add the three results together and get the yearly percentage for each question. Seems to me that this is not valid. Am I wrong? Thanks for your feedback. Troy


    Chris Seider

    Unsure what they are trying to accomplish. A weighted % at least would seem logical but a metric can be designed any way management wants. Is there a purposeful unequal number of clients surveyed every 4 months?


    Robert Butler

    You will have to provide more details before anyone can offer much in the way of comments.

    1. When you say “they add the three results together and get the yearly percentage for each question,” what does this mean exactly? It sounds like they are summing separate percentages, but it could also mean they are taking all of the results at the end of the year and computing single percentages.

    2. Is this just a first attempt at a survey, or do they have data from past years? If they have prior data, I hope they:

    a. Know which survey result corresponded to which year and respondent
    b. Haven’t changed the wording of the questions during that time
    c. Have kept a record of who did and did not respond
    d. Have kept a record of which customer got what part of your product mix
    e. Have kept a record of the percent of your product mix purchased by the respondents.
    f. Know how to run an analysis to identify significant trending
    (all of this, of course, assumes the questions are acceptable with respect to capturing the information you think you are collecting).


    Skip Pletcher


    We can rather easily combine the results of different surveys with different sample sizes, but the statistics derived from that combination, and the interpretation of any results drawn from those surveys, require significantly more analysis than would be needed for a single survey.

    As you suspected, “adding” percentages doesn’t work at all for statistical validity (unless each survey item is yes/no and the number of clients and items answered is consistent across all three surveys). If the clients who answer are different every four months, the better way to calculate annual percentages is to pool the responses: add up the raw responses to each survey item across all three surveys, then divide by the total number of responses to that item.
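As a quick sketch of the difference, here is what pooling raw counts versus adding or averaging per-survey percentages looks like for a single yes/no item. The counts are made up for illustration; only the sample sizes (14, 4, 6) come from the original post.

```python
# Hypothetical yes/no item: (respondents, "yes" answers) per survey.
# Only the sample sizes 14, 4, 6 come from the thread; the "yes"
# counts are invented for illustration.
surveys = [(14, 12), (4, 1), (6, 5)]

total_n = sum(n for n, _ in surveys)            # 24 respondents
total_yes = sum(y for _, y in surveys)          # 18 "yes" answers

pooled_pct = 100 * total_yes / total_n          # each client counted once
per_survey = [100 * y / n for n, y in surveys]  # 85.7%, 25.0%, 83.3%
summed_pct = sum(per_survey)                    # the employer's "add them up"
averaged_pct = summed_pct / len(surveys)        # unweighted average

print(f"pooled:  {pooled_pct:.1f}%")    # 75.0%
print(f"summed:  {summed_pct:.1f}%")    # ~194% -- not a percentage at all
print(f"average: {averaged_pct:.1f}%")  # ~64.7% -- overweights the n=4 survey
```

Pooling weights each survey by how many clients actually answered it, which is what Chris's “weighted %” suggestion amounts to.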

    Here’s why:

    Let’s say you start with 14 people who love your work and max you out (10 out of a possible 10). You start the year with a 100% rating.

    Then you go to training and they suggest a new style for your work. It doesn’t work for you, so you get rated 1 by all four clients, for a 10% rating overall.

    By the time the results come back, you’ve disappointed another three clients, realize from those results that the new style isn’t working, and go back to fully satisfying the final three clients.

    (if all clients are different) Over the year you fully satisfied 17 people (14 + 3) and disappointed 7 (4 + 3), so the third survey scores (3×1 + 3×10)/60 = 55%. Pooling all the raw scores gives 177 points out of a possible 240, or (almost) 74% satisfied. But if I add your survey percentages for the year (100% + 10% + 55%), your clients were 165% satisfied!??? (Or do they divide that by three surveys to get 55%, which is also wrong, because it counts the 4-client survey as heavily as the 14-client one.)
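Reading the story above as ratings on a 1-to-10 scale (14 tens, then 4 ones, then 3 ones and 3 tens; this is one consistent interpretation, since the post's own totals don't quite add up), the arithmetic can be checked directly:

```python
# One consistent reading of the story (1-10 ratings; an assumption,
# not the poster's exact figures):
surveys = [
    [10] * 14,           # survey 1: 14 delighted clients
    [1] * 4,             # survey 2: 4 clients hate the new style
    [1] * 3 + [10] * 3,  # survey 3: 3 disappointed, 3 fully satisfied
]

# Per-survey score as a percentage of the maximum possible.
per_survey = [100 * sum(s) / (10 * len(s)) for s in surveys]  # 100%, 10%, 55%

summed = sum(per_survey)             # "adding" gives 165% -- impossible
averaged = summed / len(per_survey)  # 55% -- overweights the n=4 survey

# Pooled: every individual rating counted once, out of 240 possible points.
all_scores = [r for s in surveys for r in s]
pooled = 100 * sum(all_scores) / (10 * len(all_scores))  # 73.75%, almost 74%

print(per_survey, summed, averaged, pooled)
```

The pooled figure is the only one that treats each of the 24 clients equally.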

    We could get deeper, but please keep in mind that most people don’t fill out surveys unless 1) they are very unhappy, 2) they are very happy, 3) they get something in return for completing the survey, or 4) they also get graded by survey results.

    All that having been said, the more important question is not how we arrive at the number but how the numbers are applied. If the same wrong method is used for you and for your peers and those results are used to provide some relative ranking of how well each employee satisfies clients, it may be a fair assessment in the same way that referees who miss calls at a football game for both teams tend to provide a fair result even though they let both sides get away with cheating.

