
Customer Satisfaction Survey Advice


This topic contains 34 replies, has 15 voices, and was last updated by Ang 11 years, 8 months ago.

    #48220

    annon
    Participant

    Forum,
    I want to create an actionable survey instrument and need some SME input from those who have been there and done it.  I would like to use an importance-satisfaction approach if possible, and would like to determine a scaling approach that will allow a meaningful analysis after the fact (I am assuming regression, but am open to suggestions).
    Any thoughts and references (online would be preferred) would be great. 
    As always, thanks for your time.

    #161758

    qualitycolorado
    Participant

    annon, Good morning!
    Good survey design is both an art and a science.  In the case of satisfaction surveys, it is good practice to have both “scaled” questions as well as open-ended questions, to allow participants to enter “verbatim” comments to expand on their responses.  Do you plan to have both types of questions?
    Additionally, surveys, particularly if you want them to be “actionable”, should not be designed or conducted in a vacuum.  Rather, they should be coupled with “focus groups” of survey participants, chosen based on their responses to the questions (if possible).  These focus groups will provide you with additional information that should guide the development of your actions.  Do you plan to have focus groups as part of the mix?
    Also, after you take some action(s) based on the survey, it is good practice to re-survey (for “Plan-Do-Check-Act” reasons).  Are you planning to do a follow-up survey?
    Best regards,
    QualityColorado

    #161764

    annon
    Participant

    Hey QC,
    Thanks for your time.  I am aware of the fundamental aspects of survey creation and distribution, and intend to use the initial survey effort to identify those areas that require more in-depth investigation.  I am considering an importance-satisfaction format, with space for additional comments whenever a response falls below a stated score, combined with open-ended questions and straight Likert scales for other items.
    I wanted to use the more cost-effective survey format to initially identify those areas that will require further analysis in more open-ended formats (phone interviews, face-to-face, and focus groups … resources permitting).
    But what I was truly after was insight into how best to analyze the survey results once collected, and how to scale accordingly.  I have seen 5- and 10-point Likert scales used in our organization in the past that proved less than stellar.
    Any thoughts on that one?   Thanks!

    #161857

    George Chynoweth
    Participant

    Here are 2 articles that deal specifically with your questions:
    “Actionable Information from Soft Data” https://www.isixsigma.com/library/content/c030602a.asp
    and “Variance Markers in Survey Design” https://www.isixsigma.com/library/content/c030922a.asp
    I don’t like the importance-satisfaction approach as it doubles respondent burden (and thereby decreases your survey completion rate), and, if you run a multiple regression, you will get better information anyway.
    Enjoy :)
    George

    #161863

    Deanb
    Participant

    Unless you are conducting your survey within an organization with a history and culture of doing marketing research and surveys well, I would go into this with humble expectations. Sadly, very few customer survey instruments I have seen from non-MR-oriented organizations ever seemed to accomplish very much. Most failed to meet their response requirements by a country mile, and fewer still revealed anything actionable they could not have learned better another way. At best, surveys represent a relatively low-quality level of actionable intelligence for the buck versus other MR alternatives. MR-oriented companies seem to place a high priority on setting up and maintaining strong internal customer data streams. They also invest in informal feedback channels and interpersonal relationships with customers. In other words, they already have a continuous and ongoing “deep knowing” of their customers’ behavior and preferences before they do surveys, and when they do them, they use them sparingly to answer little questions, not big ones.
    Hence, the key to doing surveys successfully often begins with the quality of your starting data system and how mature your organization is in doing comprehensive and continuous MR. If you do not have that capability or maturity, perhaps that is where your discretionary MR dollars should be directed first.
    Good luck.

    #161865

    Aviator
    Participant

    I agree with DeanB about the necessary preconditions for getting “actionable” information from a survey.  It rather begs the question: why do you need to do a scaled survey in the first place if you truly have the foundation in place to know and understand your target audience?  Too much of the time, the goal is to generate a number, and then that number becomes the focus instead of gaining insight and knowledge.  The problem is that the very best information (my opinion) comes from open-ended and focus-type discussions, documentation, and synthesis.  Likert scale or not, people’s opinions and perceptions are qualitative, not quantitative, and the former is a great source for identifying gaps and innovation opportunities.  In very few cases is it appropriate to analyze survey ratings with descriptive-style statistics (exceptions being questions such as “I am between forty and fifty”, etc.).  I would keep the importance scaling, as it is important to understand the weighting of the question responses – multiple regression will not validly yield this information.  Also, I would summarize the rating scores with percentage distributions by question and rating value (histograms work well here), thus avoiding averages, standard deviations, and the like – see the sketch below.
    My two cents FWIW.
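    For illustration, a minimal sketch of that percentage-distribution summary, assuming Python with pandas; the question names and ratings are hypothetical:

```python
# Summarize 1-10 ratings as percentage distributions per question,
# avoiding means and standard deviations on ordinal data.
import pandas as pd

# Hypothetical raw responses: one row per respondent, one column per question.
responses = pd.DataFrame({
    "Q1_support":  [7, 9, 4, 8, 10, 6, 7],
    "Q2_delivery": [3, 5, 2, 6, 4, 5, 3],
})

for question in responses.columns:
    # Percentage of respondents choosing each rating value (1-10).
    pct = (responses[question]
           .value_counts(normalize=True)
           .reindex(range(1, 11), fill_value=0)
           .mul(100))
    print(f"\n{question}")
    print(pct.round(1).to_string())  # or plot as a bar chart / histogram
```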

    #161870

    Deanb
    Participant

    I hate to sound negative towards honest efforts to do good marketing research. If anything, I hope to encourage folks to do more of it. The main thing most surveys prove is that the organization stands in great need of basic marketing management. If it gets you to this point, then I guess the survey was worth it.
    This is akin to a CMM level-zero company considering a level 4 or 5 process tool, when they actually need telephones, fax machines, e-mail, and a way to type letters. Surveys are advanced MR tools and are not very relevant when a company has a level-zero market management system. Capability maturity matters.
    Unfortunately, I have become so conditioned by failed survey efforts that I associate the mere suggestion of doing a “survey” with negative and painful Pavlov-like responses. I see them as time killers at best. At worst, there is always a danger that some ill-informed executive will make big decisions based on the frail survey data. Sadly, I have seen it happen more than once.

    #161873

    reg
    Participant

    Anon, you’ve opened a real can of worms here.  Your question raises several important issues:
    1.  Are the metrics of customer satisfaction the right ones to be using for actionable results?
    2.  What are the benefits/drawbacks to the importance/performance paradigm?
    3.  What’s the best source of questions for a survey?
    4.  What types of analyses will produce the most actionable results? (relates back to issue #1)
    I’ll try to address these as succinctly as possible.
    1.  The metrics of customer satisfaction have repeatedly been shown to lack a substantive connection to business results (see Reichheld; see also Reidenbach).  People make purchase decisions – whether buying pizza, cars, insurance, or tractors – based upon their perceptions of the value received.  And perceived value is a function of perceived quality relative to perceived price.  This interaction of quality and price (plus image) is simply not addressed by the metrics of satisfaction.  For that, you need the metrics of value.
    2.  There are several problems with the importance/performance paradigm.  One of those has already been addressed here, namely, the issue of respondent fatigue and attrition.  Another has to do with the distinction between qualifiers and determiners.  Qualifiers are “table stakes”: typically very important, but typically not a source of differentiation.  Airline safety would be a good example: very important, but probably not worth investing in for differentiation.  Determiners, by contrast, are the attributes that actually drive choice and differentiation.
    3.  I applaud the previous suggestion to couple surveys with focus groups, but would differ on the timing.  Focus groups (and/or customer interviews) should be the source of your survey questions.  For more information on designing good questionnaires based on value, I’d recommend the ASQ publication, Strategic Six Sigma for Champions: Keys to Sustainable Competitive Advantage, especially chapters 4 – 6.
    4.  Finally, you are correct in assuming that one of the primary analytical tools will be regression-based.  But, for a market-focused definition of CTQs, you’ll need to precede your regression analyses with factor analyses (a sketch of this sequence follows this post).  Then you’ll need to identify specific competitive value performance gaps – whether positive or negative – because these gaps will serve as the starting point in identifying and prioritizing Six Sigma projects.  Chapters 1 – 3 of Strategic Six Sigma for Champions explain just how that works.
    If you’d like more information regarding the issues you’ve raised, or have additional questions, you can contact me offline.  Providing your organization with the type of information that can drive both competitive strategy and significant process improvements will make your services invaluable.  
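    A minimal sketch of the factor-analysis-then-regression sequence from point 4 above, assuming scikit-learn; the ratings here are fabricated purely to show the mechanics, not the book’s method:

```python
# Reduce correlated survey items to a few factors, then regress the
# "bottom line" satisfaction item on the factor scores to surface CTQs.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
items = rng.integers(1, 11, size=(200, 12)).astype(float)    # 12 survey items (hypothetical)
overall = items[:, :3].mean(axis=1) + rng.normal(0, 1, 200)  # stand-in dependent variable

factors = FactorAnalysis(n_components=3, random_state=0).fit_transform(items)
model = LinearRegression().fit(factors, overall)

print("R^2:", round(model.score(factors, overall), 2))
print("Factor weights (candidate CTQs):", model.coef_.round(2))
```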

    #161889

    annon
    Participant

    George,
    Thanks so much.  I shall continue the research.

    #161888

    annon
    Participant

    I appreciate your input.  Can you give me examples of the portions of your comments quoted below?

    MR oriented companies seem to place a high priority on setting up and maintaining strong internal customer data streams. They also invest in informal feedback channels and interpersonal relationships with customers
    surveys represent a relatively low quality level of actionable intelligence for the buck versus other MR alternatives
    My thought was to use the surveys as an internal research tool in an effort to audit the existing degree of quality-based activity between internal customers (i.e., supplier-consumer).  From these findings, I would hope to isolate those areas requiring more in-depth investigation and apply more effective techniques.  This approach appears to balance cost-effectiveness with quality of data.  Is there another approach you would recommend?
    Thanks again!

    #161891

    annon
    Participant

    So how are you defining the level zero market management system?  How is this characterized?
    And note this is an internally directed research effort to simply audit the degree to which quality ideas or practices are present.  This information would then be used to craft the depth and breadth of proposed training efforts and, in turn, justify the proposal to management.
    But I would be interested in hearing how informal and qualitative “listening posts” can then be quantified as to their effect.  It sounds as if this is not something you recommend quantifying.  Why is that?
    THANKS

    #161893

    annon
    Participant

    A,
    Thanks for the advice.  But to your point: there is no foundation in place.  This would be an internal baselining exercise to determine training needs and existing quality practices.
    And would this not be a valid approach to establishing said foundation?  Using such techniques as focus groups or interviews to determine critical areas of interest qualitatively (the order of which I inverted in the earlier post … sorry), which can then be applied to a much larger audience – for validation and quantification – using a survey instrument?  What am I missing?

    #161894

    Aviator
    Participant

    Sure, I think using both would be appropriate.  I missed the idea that you are trying to establish the foundation.  Better to start simply and parsimoniously in this case, maybe.

    #161896

    Deanb
    Participant

    A level-zero mkt mgt system, in my view, would be the complete lack of management focus on listening to, analyzing, and solving customer satisfaction issues. This would look like a totally ad-hoc approach, or no approach at all. Commensurately, level 1 would look like a one-man show with no support programs. Level 2 would be a project focus, but not org-wide. Level 3 would be an embryonic org-wide process, but not yet data-driven, integrated, or effective at solving known dissatisfaction. Level 4 would be an integrated, org-wide, data-driven system capable of solving and preventing dissatisfaction. Level 5 would be optimized and completely capable of preventing dissatisfaction from ever occurring at all.
    I must have missed in the thread that your survey was intended for internal customer satisfaction. This changes the discussion in many ways. One way is that internal satisfaction is linked to productivity and financial performance in the management literature, whereas one poster referenced evidence that this linkage does not hold for external customer research. I can say that my own survey work consistently revealed a strong correlation between employee satisfaction and process capability.
    One way of evaluating the quality of internal satisfaction as a system might include whether there is a management culture or core value that the role of the manager is not merely to satisfy the boss, but also to help subordinates cope and succeed. One best practice I have observed at a large company (recipient of several “Best Places to Work” awards) was how they required all managers to seek out dissatisfaction and solve it amicably. Bonuses were tied to this.

    #161897

    Deanb
    Participant

    Annon,
    I applaud you for researching your task so aggressively and professionally.
    In my reference to “internal data streams,” I was still thinking in terms of external marketing, and meant internally generated databases of customer data, such as phone logs, inquiry activity, proposals, sales, back charges, or any other quantifiable data relevant to Customer Relationship Management. This is usually a cross-functional task involving the marketing dept and an ERP system, but it can be done without ERP.
    Informal feedback channels are internal (employees) or external (vendors, etc.) sources that have contact with your customers and can provide feedback about your customers’ satisfaction and relationship with your company.
    “Other MR alternatives” represent all the other data, detection, problem-solving, and prevention investments you could choose to make. For example: if you have a known major dissatisfaction, it may be more important to invest in solving it immediately than to invest in a survey.
    IMHO, a survey is akin to SPC. It can point you in a direction, but it is incapable of revealing a root cause or solving or preventing the failure mode. You must go walking around and talk to involved parties to dig deeper. It is when you dig deeper that you generate the valid and actionable information.
    Hope this helps. Good luck.

    #161951

    Deanb
    Participant

    Annon,
    When you stated that you want this survey to “audit the existing degree of quality-based activity between internal customers (ie supplier-consumer),” do you mean you are looking for symptoms of quality escapes (namely dissatisfaction) so you can investigate the quality escapes? Also, are you intending to monitor for quality escapes in the downstream direction only – namely, escapes by your suppliers affecting your company, and escapes by your company affecting your customers? Or are you looking to monitor dissatisfaction in all directions, including supplier dissatisfaction with your company, and company dissatisfaction with your customers? I am not clear on the intended breadth of scope of your instrument.
    Regardless of your scope here, I do believe there are better ways to get this kind of information faster, better, and cheaper, with the benefit of immediate potential for prevention (or pre-control). This method is continuous and targeted human interaction. Sometimes this can be achieved at one level; sometimes multiple levels need their corresponding contacts to manage.
    Targeted face-to-face contact, when done continuously, nets volumes. I have seen managers who spend 2 minutes with each report each day know more about what is going on, in detail, than I could ever achieve in even my best-executed surveys. A survey might help you catch what the system misses; however, your best bet is to invest in the system so you will not need to survey. Good luck.

    #161957

    George Chynoweth
    Participant

    I have to differ with much of what has been said here. A good survey is reliable and valid, and can provide actionable and strategic information that will tell you how and how not to allocate your resources. However, on its face, it is difficult to tell a good survey from a bad one, and most people think they can develop a survey – and often do, with disastrous results. A poorly designed survey can actually provide misinformation or even disinformation. It is understandable why surveys are regarded as untrustworthy as a source of real information – and it is an unnecessary state of affairs. Below are some points to consider.
    Regarding some potential origins of the questionnaire:
    1. A typical starting point for a customer or employee satisfaction questionnaire for a quality organization would be the section of the strategic business plan that deals with customers and employees. Look at the key business drivers and derive your metrics from them. Develop questions you think will measure these metrics. Don’t do this in the dark – ask managers & employees what they think of the questions you develop. Edit as necessary. Use common sense.
    2. In developing “an internal research tool” for establishing baselines, ask your internal customers what works & what doesn’t. You’ll get varying and possibly contradictory opinions, but you’re not looking for consistency; you’re looking for an item pool for your questionnaire. I worked for DoD for many years, and the best commanders I saw were the ones who got out from behind their desks and into the trenches with their soldiers. They went to the motor pools, firing ranges, security perimeters, etc., and asked questions of individual soldiers: “How’s it going? What do you need? What problems are you having?”. Do the same. As implied elsewhere, individual attention provides credibility and PR value, as well as information for your item pool.
    3. Be sure to add at least one “bottom line” question (which will serve as the dependent variable in the regression analysis), such as “Are you satisfied with …”. When running an external customer survey, add two more questions like “Value for the money”, and “Will you recommend us to …”. These will serve as dependent variables for two additional regressions – run 3 separate regressions, one for each dependent var. These additional questions will allow you to get at customer loyalty and customer value in addition to satisfaction.
    4. Include a qualitative item such as “Give us just one idea on how to improve …”. This is very focused, and almost everyone can come up with one idea. You’ll get a lot of data.
    5. Once you have an item pool, run it by some colleagues/managers who are familiar with the metrics you’ve chosen. Get their input regarding which items best measure what you’re after. Keep in mind that you need to balance your objectives with respondent burden (time & thoughtfulness). Keep the questionnaire short & simple.
    Scaling: This is where many surveys really get it wrong.
    1. Data contain information; they are not information themselves. E.g., 200 PSI is twice the pressure of 100 PSI, but 200 degrees Fahrenheit does not have twice the heat of 100 degrees F. The difference is due to scaling, and the lesson is: you need to design the type of data you need before you start collecting it.
    2. Likert developed his scale in 1929 using 5 points with a descriptor for each point. Looks easy. However, Likert’s scale is balanced (an equal number of positive and negative ratings), and each rating point is visually equidistant from its neighbor. The equal distance requirement supposedly provides interval level data as opposed to ordinal, thus allowing the more powerful parametric analyses to be used. Naïve survey developers usually overlook these FUNDAMENTAL characteristics, and introduce systematic bias into their analyses which just wreaks uncontrolled havoc on explained variance. A particularly deadly scale is something like “Excellent – Very Good – Good – Fair – Poor”. It has 4 positive valuations and 1 negative. It is rarely equidistant, and it truncates the response spectrum – it is akin to saying “Do you agree with me, or do you AGREE with me.” Trash. We’ve come a long way since 1929, but most folks continue using, and even bastardizing, Likert’s scale.
    3. When it comes to measuring opinions, attitudes, beliefs, etc., we have been using a natural rating scale for decades that everyone understands – they even made a movie about it: “10”. As kids we used to rate July 4th fireworks on a 10-point scale – I’m sure most have done something similar. I don’t understand the continued debate about how many points a survey scale should have, and whether or not it should contain a neutral point. A 10-point scale provides a broad enough response range that a neutral point is unnecessary. There are also 2 key points with this scale: the rating points must be equidistant, and you should use only 2 descriptors, one for each end of the scale (e.g., Excellent … Terrible). The respondents will be able to fill in the blanks, so to speak, without intervening value judgments provided by the survey developer. This scale has been shown to improve reliability considerably. In my own work, I have never had an internal reliability index (Cronbach’s alpha) below .90 – a typical survey is between .7 and .8. Reliability, in this context, can be thought of as Precision of Measurement. 90% measurement precision for a survey is excellent.
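    For reference, a minimal sketch of computing Cronbach’s alpha directly from its definition, with fabricated ratings:

```python
# Cronbach's alpha: internal consistency of a block of scale items.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 10-point ratings: 5 respondents x 4 items.
ratings = [[8, 7, 9, 8],
           [3, 4, 2, 3],
           [6, 6, 7, 6],
           [9, 8, 9, 10],
           [5, 4, 5, 5]]
print(round(cronbach_alpha(ratings), 2))  # the post's target: >= .90
```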
    Data Analyses:
    1. Of course, examine the data and run the standard descriptives. Look for outliers, unusual groupings, normality, etc. Get a feel for your data before you ever start to analyze it – you will better understand the results, and be more attuned to problems should they arise.
    2. Three basic analyses will provide a wealth of information. (1) Convert the item means to percentages and consider them as Performance data (from the respondent’s perspective). (2) Calculate the Coefficient of Variation (standard deviation / mean) – this will allow comparison of variation among all items. (3) Run a regression analysis on each dependent variable. The regression will weight each item regarding its impact on the dependent var – this is “Critical to Satisfaction” information (and Critical to Quality, and Value). You now have Performance, Variation, and CTS (CTQ) data. Is this starting to look familiar? (A sketch of these steps appears at the end of this post.)
    3. Analyze the qualitative data looking for common themes. Once you have themes, examine them using the Performance, Variation and CTS data. If you’ve collected any demographic data, you can sort your target groups with the same procedure. You’ll find what’s important, what’s not, and to whom it matters.
    4. When you run the regression(s), you’ll have a statistic called the Multiple R Square. This tells you how much variance in the dependent var was explained by the independent vars. It is a measure of validity. Convert this number to a percentage, and you have an estimate of accuracy. A good survey will be at 85% or higher – most don’t reach 50%. So, if you have an accuracy of, say, 88%, this means that 12% of the variance in the dependent var was not explained by the independents. This is due to measurement error, sampling error, scaling error, etc. If you’ve done good developmental work, these errors (residuals) will be normally distributed, with a mean of 0 and a standard deviation of 1. Be sure to examine these residuals. If they aren’t normally distributed, you’ve introduced bias, or uncontrolled error, into your results. This is bad news, as we don’t know where or how this error is affecting the results – which is misinformation, making the results untrustworthy.
    Finally, go back to the beginning. Include an “informed consent” with the survey. Tell them who’s doing the survey, why, and what will happen with the results. Tell them if it’s confidential or not. Who can they contact if they have questions? AND, provide feedback – tell them where they can find the results (website, newsletter, personal notification, etc.). This is great PR and will help considerably towards the success of the next survey.
    “Anything that exists, exists in some quantity, and therefore can be measured.” LL Thorndike, 1932. I agree completely. :) Enjoy.
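    As referenced in point 2 of the Data Analyses above, a minimal sketch of the Performance / Variation / CTS / accuracy steps, assuming pandas, scikit-learn, and SciPy; item names and ratings are fabricated, and the same regression would simply be repeated for each “bottom line” dependent variable (satisfaction, value, recommendation):

```python
# Performance (item means as %), Coefficient of Variation, regression weights
# (CTS), Multiple R-squared as accuracy, and a residual normality check.
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 11, size=(150, 5)).astype(float),
                     columns=[f"item{i}" for i in range(1, 6)])
# Stand-in for the "Are you satisfied with ..." bottom-line question:
overall = items.mean(axis=1) + rng.normal(0, 0.5, 150)

performance = items.mean() / 10 * 100            # (1) Performance data
cv = items.std(ddof=1) / items.mean()            # (2) Coefficient of Variation
model = LinearRegression().fit(items, overall)   # (3) CTS weights
accuracy = model.score(items, overall) * 100     # Multiple R-squared, as a %

residuals = overall - model.predict(items)
_, p_normal = stats.shapiro(residuals)           # residuals should look normal

print(performance.round(1), cv.round(2), sep="\n\n")
print("CTS weights:", model.coef_.round(2))
print(f"Accuracy: {accuracy:.0f}%  residual-normality p-value: {p_normal:.2f}")
```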

    #161963

    annon
    Participant

    George,
    Really good information.  One last favor: I would appreciate a suggestion as to reference material, preferably a comprehensive one covering design through analysis.  Thank you, sir.

    #161974

    George Chynoweth
    Participant

    Hi Annon,
    I don’t have exactly what you’ve asked for, as I’ve developed this approach myself: I’ve tried to include SPC in soft data analysis. You can find 4 of my articles on survey development and subsequent analyses here:
    http://www.dxresearch.net/index.cfm?fa=resources.pubs
    It includes updates of my 2 articles on this site. Additionally, this is an excellent article on survey development: Morrel-Samuels, P. 2002. “Getting the Truth into Workplace Surveys.” Harvard Business Review (February). Tufte’s book, “The Visual Display of Quantitative Information”, is also excellent.
    Hope this helps a little. As always, Enjoy Yourself!

    #162011

    Questions for Dr. Chynoweth
    Participant

    Dr. Chynoweth,
    While I find your suggestions for the most part acceptable, your post raises a few follow-up questions:
    You write:
    Likert developed his scale in 1929 using 5 points with a descriptor for each point. Looks easy. However, Likert’s scale is balanced (an equal number of positive and negative ratings), and each rating point is visually equidistant from its neighbor. The equal distance requirement supposedly provides interval level data as opposed to ordinal, thus allowing the more powerful parametric analyses to be used.
    1. How come you write that Likert developed his scale in 1929, when Thurstone only published his Measurement of Attitudes with Chave in 1929, and Likert completed his dissertation in 1932, on which his published article was based? Could you provide the complete citation from 1929 so that readers on this site can look it up? There is an article by Thurstone from 1927 which is located at the British Museum but cannot be accessed, as it is so fragile that it cannot be microfiched. I wonder what article Likert wrote in 1929 that I missed.
    2. How come you write: “The equal distance requirement supposedly provides interval level data as opposed to ordinal”? This is the one misconception of a Likert scale that is continually repeated. In fact, Likert used internal consistency rather than item-total scores to determine his scale level. So, there is more to Likert scaling than just the equal distance requirement.
    3. How come you are not explicitly addressing the serious issues associated with variability and its potentially detrimental impact on the use of regression analysis as a means of key-driver analysis? How do you address that in your work? Your 85% cut-off appears quite arbitrary (however, this is not an issue if your internal research shows that it works in practice).
    4. Finally, how have you incorporated the past five years of extensive new research in the field of satisfaction research? It seems to me that all of your recommendations predate the work of Wagner, Mittal, Anderson, and others.
    5. I am also not aware of any publications of yours in scientific journals or conference proceedings. Have you published in this arena, and if so, when and what?
    Thanks for your time and any new information that I can learn.  

    #162020

    Dr. Scott
    Participant

    Annon,
    Please contact me at dodoc@hotmail.com. I think I might be better able to assist you. I have expertise in customer satisfaction surveys. Even my dissertation was dependent upon it. If you leave a number where I can reach you, I would be pleased to help you. Or we can do it the hard way, by email.
    The Likert debate regarding ordinal or interval data is really a moot point now. The CLT (Central Limit Theorem) pretty much solves that problem.
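    A quick illustration of that CLT point, assuming NumPy/SciPy; the rating distribution is fabricated. Even though individual 5-point ratings are discrete and skewed, the means of modest samples behave approximately normally:

```python
# CLT illustration: means of many samples of discrete 1-5 ratings
# look approximately normal despite the skewed ordinal population.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Skewed, clearly non-normal population of 5-point ratings:
population = rng.choice([1, 2, 3, 4, 5], p=[0.05, 0.1, 0.15, 0.3, 0.4],
                        size=100_000)

sample_means = [rng.choice(population, size=50).mean() for _ in range(2_000)]
print("skew of sample means:", round(stats.skew(sample_means), 2))        # near 0
print("excess kurtosis:", round(stats.kurtosis(sample_means), 2))         # near 0
```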
    But something called Means-End Analysis is an important consideration. You might find references to Means-End Analysis on the internet, including papers I have written, though I can give it to you straight from the horse’s mouth (so to speak).
    Contact me for help or more information if you wish.
    Best regards,
    Dr. Scott

    #162031

    Means End analysis
    Participant

    The means-end analysis of needs assessment is as old as classical behaviorism. Dr. Scott has finally caught up with it. We’re all glad for him that he feels so empowered by something that’s been around for over 50 years (with a little triangle, how cute). Congratulations, Mr. Dr. … yet another one overinflates his experience and knowledge.

    #162045

    George Chynoweth
    Participant

    Dear Questions, Means, and What Conference,
    The original poster has asked for advice and assistance. Every post but yours has been an attempt to do that. You may note that when I disagreed with some of the advice, I did not challenge or try to embarrass the poster – I simply made my own points for further consideration. And since I had so much ground to cover, I made that response relatively brief. Sorry if I omitted some details that you find of minor importance.
    This is a help forum for practitioners. For example, rather than take me to task, Dear Questions, regarding the work of Wagner, Mittal, and Anderson, why don’t you discuss it a bit in the forum for everyone’s benefit? You could have done the same with Means-End Analysis. Rejoinders could have been most enlightening for the community. The attack on Dr. Scott (btw, I don’t know him) is really quite embarrassing – it says much more about the poster’s arrogance, and cowardice, since it was posted anonymously. The utter lack of thoughtfulness and good faith shows really bad form. I suggest to any and all: if you can’t help this community, find another forum.
    And finally – Annon, if you have the information you need, just close down this thread via non-response, as the signal-to-noise ratio has really degenerated.

    #162046

    Brandon
    Participant

    George, I agree with you. See my post titled “Post Analysis”. This has really deteriorated.

    #162047

    crazy
    Participant

    Means-End,
    Sounds like you think a Dr. has to be always correct about everything. You are being trivial. Re-read the posts – Dr. Scott was not off the mark.

    #162055

    Dr. Scott
    Participant

    Since you don’t even seem to know your own name, I will just refer to you as Insecure Idiot.
    Dear Insecure Idiot,
    Normally I do not respond to idiot comments that add no value to the forum, but when facts are in question I sometimes can’t resist.
    Having said that: DHL started Six Sigma in Brazil over 10 years ago, then migrated parts of the tool set to the US shortly after. The leader of the effort has the initials MC (I know him personally). They did not call it Six Sigma, probably because people like you have helped to degrade the integrity of the name via your lack of knowledge to accomplish anything positive. But they were very successful using the Six Sigma tools.
    As far as my knowledge and ability goes, I am certain my successes speak to that. And if I might speak for others here, I and others would love to hear about your successes (if you have any). By the way, you can purchase my dissertation from The University of Tennessee which directly addresses the best approach to customer satisfaction research. What do your books, presentations, or papers have to say about the topic? If you can’t afford to buy it, then join AMA and research my CSD presentations from them. Or, you can get them from the ICSD&CB conference proceedings.
    As far as “outing myself”, I don’t mind that a bit. I like the idea that people here receive helpful information from me. But fortunately, I am not low enough to try to dispute the knowledge of others just to make myself feel “better” (as you do). Instead, I keep my mouth shut, and my ears and mind open, so that I might improve on my expertise. I would STRONGLY recommend you try this. Who knows, you might even learn how to do a t-test yourself. Or even work your way up to LISREL analysis, which is at the heart of analyzing Customer Satisfaction, Dissatisfaction and Complaining Behavior utilizing the Means-End approach.
    In any case, I hope your mental problem resolves soon for your sake.
    I think we know who is the fool here.
    Dr. Scott

    #162063

    Brandon
    Participant

    Tom – you’re right – ain’t gonna happen! Nature of the beast I guess.

    #162069

    Champ
    Participant

    Dr. Scott,
    I am new to the site (2 months). I really enjoy your contributions to the forum. As Brandon pointed out well, those like Idiot who have no significant contribution to the site are like the drunk loudmouths in the bars thinking they are the world champs. Glad to see you putting them in their place. I vote for you to keep the Dr. title – it’s these wannabes that are intimidated by you. They must have a Napoleon complex or something.
    The Champ

    #162077

    Marlon Brando Name Change
    Participant

    Marlon, Idiot, Fake Accrinton is now confused. Can’t you just stick with one name, bud?

    #162086

    Brandon
    Participant

    Confused, you may have hit on something. Someone responded to my “Post Analysis” post, Tom I think, who said the garbage that is in this forum is not present on lean.org.
    So I went there. It is difficult to be a part of their forum without using your real name. That may be a contributor to the problem here – a lot of hiding going on.
    The inner self becomes very clear when you can remain anonymous.
    Can’t help with this one.

    #162096

    Brandon
    Participant

    Stan, Brandon is not my real name. I am using an alias, just as are a high percentage of the people on this site – as are you. I am not lying about anything, so your assertion in that regard is unfounded.
    I made an observation about the difference between the tenor of the chats here and those on lean.org. Perhaps I have the incorrect causal factor. Perhaps even if all used real names they would still be as obnoxious and cruel. Just a supposition on my part.
    The fact remains that the highest percentage of your posts are putting someone down; very little is contributed by you in a constructive manner. I don’t know much about that pathology, so I won’t comment.

    #162105

    Mikel
    Member

    Brandon, I don’t think you are lying about anything. I do know that one of the most frequent posters of late that uses his real name is lying about his experience and his past.

    #162107

    Brandon
    Participant

    OK Stan – I get it now.

    #162206

    Deanb
    Participant

    There have been many excellent technical contributions in this thread on surveying. One can truly dedicate a lifetime to this science. However, keep in mind that the art side of surveying matters at least as much, and sometimes more, to real success.
    The survey effort itself also has customers, who care about costs and benefits. Earning respect from management and participants ultimately comes from efficiently identifying and solving real problems. An average survey that is part of achieving this is always better than a brilliant survey that merely generates superior data. If you follow through post-survey and make sure something gets improved, at the end of the day others will see you as a capable and valuable survey professional, even if the data you ultimately acted upon was technically external to the survey. Good luck.

    #169806

    Ang
    Participant

    Einstein said “Not everything that can be counted counts, and not everything that counts can be counted.” There is an alternative to the type of approach you’ve discussed – have a look at: http://www.paramarq.com
    Peter


The forum ‘General’ is closed to new topics and replies.