Survey Design Help Needed

    #43917

    Dawn
    Participant

    My boss wants me to survey field users (5,000+) to determine how they feel about functionality that was introduced a few months ago into a software application that our sales reps use. We’ve developed an online survey, which we intend to administer by sending email requests to a sample of the users. We’re concerned, though, that because the respondents will be self-selecting (within the sample), the people who are most dissatisfied will be more likely to respond than those who are satisfied, thereby presenting a skewed picture of opinion across the entire user base. So we are considering a follow-up phone survey to a smaller sample, with the idea that we would then compare the results of the phone survey with the online survey and see if there is a discrepancy.
    I have a couple of questions:
    – First: Is this a good idea? Is there a better way to go about this (maybe do the phone survey first, then use the online survey to confirm the phone findings)?
    – Second: If I do a phone survey, is there a way to select a sample that will allow me to use a very small group and still get reliable results? And should I draw the sample from the overall user group, excluding the ones who were asked to complete the online survey? I’ll probably be doing this myself, or perhaps with one or two others, and the target group (the sales reps and support personnel) is away from their desks a lot, so it’s hard to get in touch with them. I need to keep the sample size small, but not at the cost of getting useless data. This also needs to be completed rather quickly – probably within 30 days.
    I haven’t been through Six Sigma, but my boss has and she suggested I contact this group for help. Thank you for any recommendations you can offer.

    #139957

    lin
    Participant

    There are a number of things you can do. If you want an accuracy of +/- 5%, you will need to send out the survey to about 357 people. That is too many to call, but you could send out the survey via e-mail and follow up with those who do not respond. If leadership does its part in making sure people know it’s important to respond, you hopefully won’t have too many to follow up with.
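    If you want to check the 357: it matches the standard Cochran sample-size formula with a finite-population correction, assuming 95% confidence and a worst-case proportion of 0.5. A minimal sketch in Python:

        import math

        def sample_size(population, margin=0.05, z=1.96, p=0.5):
            # Cochran's formula for an infinite population...
            n0 = (z ** 2) * p * (1 - p) / margin ** 2
            # ...then apply the finite-population correction.
            return math.ceil(n0 / (1 + (n0 - 1) / population))

        print(sample_size(5000))  # -> 357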

    #139959

    Andejrad Ich
    Participant

    You are trying to assess the general opinion of a population of users. So do that by selecting and using one sampling method (and not using a second method to confirm the results of the first). And you are right: given optional response, the complainers will tend to answer and vent, so your results will be skewed (getting subjects of a random selection on the phone would be a better indication of the population).
    Really, instead of making this a statistics exercise, have you considered the practicality of just calling your highest-volume users and asking them? (Maybe that’s only the top 100, or 80, or even 20 of your 5,000.)
    Andejrad Ich

    #139960

    lawrie
    Participant

    Hello, I am Chinese. I just want to see if I could post a message in this forum. Thanks.

    #139961

    Dawn
    Participant

    That was our original plan, but then we got concerned that in a self-selecting group (since we can’t force them to respond, we can only ask), a higher percentage of “dissatisfied” users will be more likely to respond than “satisfied” users. That’s why we were thinking about using a phone survey to validate the results. But then, I didn’t know whether the phone group should be a sub-set of the original people surveyed or a different group drawn from the total population (the latter makes more sense to me). And is there a way to sample a much smaller group for the phone surveys and still have valid results?

    #139962

    Dawn
    Participant

    We were originally going to survey only recent users, but then a report came through from another group about people who aren’t using it, so now they want to know why. That’s why we decided to go for a sample of the entire population instead of just known users. I’ll do some more thinking about what info we could pull from that approach, though.
    This isn’t my strong point. Is there a reason, other than use of resources, that we should not use a second, smaller survey to validate the results of a larger one?

    #139964

    HF Chris
    Participant

    “Functionality that was introduced a few months ago into a software application”
    There seems to be something missing here… what testing of the functionality/usability of the new software was done before and during the implementation? Have you used a focus group to measure the usability of the software: for instance, how many clicks does it take to get to a desired function, can the user return to the starting point, who is the user?
    You should have a pool of information already; if not, go to a usability firm that is trained in such matters. You are asking usability questions while unsure of who your target audience is. You need to understand what and whom you are trying to measure before you wonder whether they will respond. Just a thought.
    HF Chris

    #139965

    Hans
    Participant

    You are completely forgetting the key point of your research: what is the action that you want to take based on your “findings”? You should be more concerned with the specific ways the new functionality improves productivity (or whatever it was that incited the company to make the investment) than with a “scientifically” sound response rate. So get out of your cubicle, walk over to a few end users, and talk to them: let them show you what they can now do that they couldn’t before, and let them show you what they still can’t do that creates the dissatisfaction. Six Sigma is not an exercise in scientific polling but in improving defect rates. Sorry that this sounds harsh, but 50% of the answer is the question that you ask. It appears to me as if you haven’t thought through the question that you want answered.

    #139967

    melvin
    Participant

    Ich,
    Are you certain this is a Six Sigma project? Maybe you should take it to a survey design forum. There’s Six Sigma and there’s survey design, and never the twain shall meet. It’s not as if it’s Lean, which overlaps so well with Six Sigma that they cohabit comfortably in a single forum.
    Bob

    #139969

    Dawn
    Participant

    Hans, thank you for your response. The functionality was already introduced, based on claims made about what it would provide. The point now is to find out whether the expected results have been realized. Speaking to the users around me would not give me a perspective on what a broad range of users thinks, which is what I need. The survey does not ask “do you like it”; it asks specific questions about the specific functionalities that were promised, for the purpose of identifying what users can and cannot do. My hope is that when the results are reported, the pain points that still exist – or that were created – after the functionality was introduced will be addressed. But for that to happen, I need to find out what those pain points are, and I need to be able to show that a significant segment of the users agrees on what they are.
    I appreciate your taking the time to respond, but I’m not looking for advice on how to design the survey. I’m looking for advice on how to administer it in a way that lets me confidently tell management that the survey results are representative of the whole.

    #139970

    Dawn
    Participant

    That was all done when the functionality was developed, during the UAT. It’s outside the scope of what I can affect and what I have been tasked to find out.

    #139971

    Andejrad Ich
    Participant

    …sort of my point, Bob…
    …that’s why I suggested just calling top users to find out what they think. (There’s really nothing Six Sigma about any of it, and the desire for statistical results is likely rooted in some manager’s intense desire to justify his/her own prior decision to implement a software change; after all, it’s time for mid-year reviews.)
    Andejrad Ich

    #139972

    HF Chris
    Participant

    Dawn,
    I do not think that it is out of scope. You introduced new functionality: did you meet your goal with a wide population? Did it represent the usability baseline? The goal is: what did you intend to change, and did you make a valid decision? Did the changes become confounded with unintended changes? What was your baseline in sales before the change and after it? Did the volume of customer calls specifically increase with the introduction of the new software? Did the changes show no apparent effect on any of your current statistics? Look at the data you have now, from call volume to the types of complaints or praises. Target the largest complaining variable. Do you think that a happy customer would not like those changes too? Someone once told me you should measure what you’re not doing well in order to be successful.
    HF Chris

    #139973

    Andejrad Ich
    Participant

    Dawn,
    Guess what?  “…then a report came through from another group about people who aren’t using it, so now they want to know why”
    You know what that means, don’t you?  You really just want to survey “people who aren’t using it.”  That’s your population.
    If that’s the case, and you in fact restrict your survey to that sub-population, then I don’t think it will matter whether you survey online or by phone as long as you do so randomly. 
    If you want to know what would-be users don’t like about the software, target users who used to use it but have since stopped (presumably because they don’t like it).
    Also, as Bob has reminded me, there is little about this that makes it conceivable as a Six Sigma project (in fact, I haven’t been able to make it fit yet – but I like survey science because surveys are really easy to mess up).
    Andejrad Ich

    #139974

    Dawn
    Participant

    You’re right. My first choice for the target group was the ones who used it and stopped. Unfortunately, we haven’t been able to come up with a reasonable way of getting those names.
    It sounds to me like Bob is right, as well. This isn’t a Six Sigma question. As I mentioned initially, I haven’t been through the training and was acting on a suggestion. I appreciate the help you all have offered. I did get a couple of good ideas to follow up on.

    #139975

    HF Chris
    Participant

    Usability DOE… or make a change… hope it works… send out rebates for happy responses and go under.
    HF Chris

    #139976

    Dawn
    Participant

    You think? ;-)

    #139977

    HF Chris
    Participant

    Dawn,
    Do the survey on a naive audience, or an audience that has not purchased the new software, and send them some little rebate incentive to take the test. Usability 101 is just as much Six Sigma as 5 Whys, DOEs, and undesired effects.
    Chris

    #139978

    Hans
    Participant

    So let’s assume that you have thought through the survey objectives, translated them into statements or questions, and attached a scale such as agree/disagree, yes/no, etc.
    Your main question then is how you will get a representative, non-biased response from a population. Now you have to follow these steps:
    1. Define your population: (a) current users, (b) current users + previous users, (c) current users + previous users + potential users etc.
    2. Develop the “population frame” for your population. The population frame is a list of those in your population whom you can actually reach. Every survey researcher faces the challenge that the population frame and the actual population may not be totally identical. So this is the first bias that you may introduce into your sampling, simply because you are missing records.
    3. Once you have developed your population frame, you have two options: (a) take a sample of the population frame, (b) ask your entire population frame
    4. If you take a sample of the population frame, you need to make sure that the sampling is random (unless you understand the art of systematic sampling, which I will not go into). If you have sub-groups such as “current” users vs. “previous” users, you could weight your sample in both groups based on your population frame. You can also do the weighting later, by weighting the responses from both groups based on your original population. Excel or Minitab will allow you to randomly select a group of respondents (see the sketch below, after step 5). In market research, the rule of thumb is 200 responses overall. You can go to 400, but beyond that your sampling error diminishes at a decreasing rate, so we typically stick with 200 responses if we can get them. If the sample is truly random, the actual sample size doesn’t matter much, because you can calculate the sampling error. If it is not random, your sample size doesn’t matter, no matter how large your sample is, unless you reach a full census.
    5. The next question you need to answer is the mode of administration: telephone, mail, e-mail, web-based. If you have web-based capabilities in your organization, use those. I would not be too afraid of using e-mail, even though it may bias your responses because the respondent can be identified. Mail surveys typically have the lowest response rates, and telephone surveys are expensive. Pick your battle; whatever you do, someone will find an intelligent way of identifying “bias” in your sample. Make sure you have two or three follow-ups and note the date of each response. Response bias is typically identified by comparing early respondents with late respondents (a simple t-test). If there is a significant difference you have response bias, but you gently ignore it :-).
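    To make steps 4 and 5 concrete, here is a minimal Python sketch of both the random draw and the early-vs.-late check. The file names and column names (population_frame.csv, responses.csv, email, responded_on, score) are invented for illustration:

        import pandas as pd
        from scipy import stats

        # Step 4: draw a simple random sample from the population frame.
        frame = pd.read_csv("population_frame.csv")      # hypothetical frame, one row per reachable user
        sample = frame.sample(n=357, random_state=42)    # fixed seed so the draw is reproducible
        sample["email"].to_csv("invitees.csv", index=False)

        # Step 5: after fielding, compare early vs. late respondents as a
        # proxy for non-response bias (late respondents resemble non-respondents).
        responses = pd.read_csv("responses.csv").sort_values("responded_on")
        half = len(responses) // 2
        early, late = responses.iloc[:half], responses.iloc[half:]
        t, p = stats.ttest_ind(early["score"], late["score"], equal_var=False)
        print(f"t = {t:.2f}, p = {p:.3f}")               # a small p-value hints at response bias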
    The key question is: How much measurement error will you generate?
    Error 1: sampling error = error due to sampling … can be determined through random sampling
    Error 2: bias due to questions = make sure you follow the rules of proper question writing
    Error 3: bias due to difference between population and population frame (most likely your biggest problem, from what I can see)
    Error 4: bias due to non-response = something you have to live with.
    Having a PhD in Experimental Social Psychology, and having done this type of research for the past 20 years, I can tell you that there will always be an intelligent person who will challenge your results. As Karl Pearson said in this type of situation, “Statistics on the table, gentlemen.” (Stigler wrote a book with the same title on the history of statistics.)
    Is this a Six Sigma project? Sure, call it whatever you want to call it; you’re identifying the defect rate after implementation. As you don’t have pre- and post-measurements, your best way to deal with this issue is to include a scale that asks whether the respondent thinks the product, or whatever you are investigating, is “better, same, worse” than before. You can then use regression analysis to identify key drivers of this perception, with subsequent cluster analysis to identify who the respondents are. Try to get some important demographics on your population as well. The book by Hayes is pretty good, even though in this field we tend to keep the secrets of the trade to ourselves.
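    As a rough illustration of that key-driver regression (again, the file and column names here are invented, not from any real survey):

        import pandas as pd
        import statsmodels.api as sm

        df = pd.read_csv("responses.csv")                # hypothetical survey responses
        # 'overall' = worse/same/better coded 1/2/3; feature ratings as candidate drivers.
        drivers = ["speed_rating", "ease_rating", "reliability_rating"]
        X = sm.add_constant(df[drivers])
        model = sm.OLS(df["overall"], X).fit()
        print(model.params.sort_values())                # larger coefficients ≈ stronger drivers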
    Unfortunately, survey research is widely misunderstood in Six Sigma and in American popular culture… so do the best that you can. Everybody else makes the same mistakes :-).
    I hope this helps,
    Hans

    #139980

    Andejrad Ich
    Participant

    I printed a copy for my notebook of worthwhile references (…although the solid content didn’t really require being propped up by the academic credential reference…but bravo to you on the education). 
    Andejrad Ich

    #139981

    Hans
    Participant

    I appreciate the compliment … and the “hint”. By the way, I also think your comment about heavy users is very valid!

    #183742

    Ryan Smith
    Member

    OK, there are many ways to deal with this situation easily and please your boss. Not many requestors know that a Word document can be used as an offline survey. You can request a customized survey.

    #183743

    Gutierrez
    Participant

    Can the team do a survey where we need file-upload functionality? What about confidentiality?
    – Alex

    #183744

    Ryan Smith
    Member

    Yes, they already showcase a sample file-upload survey on the sample page.


The forum ‘General’ is closed to new topics and replies.