# Analyzing Survey Data


- This topic has 12 replies, 7 voices, and was last updated 18 years, 10 months ago by Fernandez.

**March 20, 2001 at 5:00 am #27136 — Fernandez (Participant)**

Has anyone out there analyzed employee survey data? What methodologies or tools did you use? We have 34 questions, ranked 1-10, on employee satisfaction in areas related to retention, and we are using several techniques (ANOVA, regression, etc.), but I wanted to see if anyone else had done a similar analysis.

Also, how familiar is anyone out there with the use of ANOM? Thanks.

**March 21, 2001 at 5:00 am #66072**

The first thing to ask is what your measurement system analysis tells you about your 1-10 scale and the definitions behind it. Survey data is notorious for vaguely defined scales, which results in a lot of noise in the responses.

The second thing to know is that this type of instrument collects opinions: some of your population will give excellent responses regardless of how they really feel, and some will be negative regardless of how positive the activities around them are.

Behavior-based surveys with well-quantified response scales are a better way to go, but if you work with what you've got, you can do the following:

1) Use the results as a Pareto analysis to target improvement activities. Rank questions by both mean and standard deviation.

2) Use them to judge improvements over time with simple F and t tests (the survey questions have to remain consistent).
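The two suggestions above can be sketched in a few lines of Python. This is a hypothetical illustration: the question names and simulated responses are invented, not taken from the original 34-question survey.

```python
# Hypothetical sketch of the two suggestions above; question names
# and responses are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 1-10 responses: 50 employees x 3 questions
responses = {
    "Q1_pay":      rng.integers(3, 9, size=50),
    "Q2_workload": rng.integers(1, 11, size=50),
    "Q3_manager":  rng.integers(6, 11, size=50),
}

# 1) Pareto-style ranking: low mean (and high spread) flags targets first
ranked = sorted(responses.items(),
                key=lambda kv: (kv[1].mean(), -kv[1].std(ddof=1)))
for name, vals in ranked:
    print(f"{name}: mean={vals.mean():.2f}, sd={vals.std(ddof=1):.2f}")

# 2) Wave-over-wave comparison of one unchanged question
wave1 = rng.integers(3, 9, size=50)   # last year's responses
wave2 = rng.integers(4, 10, size=50)  # this year's responses
t_stat, p_value = stats.ttest_ind(wave1, wave2)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The ranking surfaces the questions with the lowest (and most variable) satisfaction first, which is where improvement effort usually pays off most.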

**March 21, 2001 at 5:00 am #66077**

Did you buy the survey, borrow it from someone else, or create it yourselves? Most surveys that are sold or used in research settings should come with reliability and validity data. Validity of a survey can be measured in many different ways, and it is important to determine how the survey has been validated.

If it is a survey you created, you would definitely want to know its purpose. Did it accurately measure what it was designed to measure; in other words, is it valid? Again, validity can be determined in a number of acceptable ways.

The question asked is how to analyze the data, or which statistics to apply, after the survey has been completed. There are many ways to do this: item analysis, factor analysis, reliability/validity studies, ANOVAs, etc. The answer depends on what you want to know and how accurate the survey is.
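As a concrete example of a reliability check, here is a sketch of Cronbach's alpha, a common internal-consistency statistic for multi-item scales. The data below are simulated (five items driven by one latent factor), not from the poster's survey.

```python
# Sketch of Cronbach's alpha on simulated multi-item survey data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questions matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
# Five items driven by one latent "satisfaction" factor plus noise,
# rounded and clipped to a 1-10 scale
latent = rng.normal(5.5, 2.0, size=(100, 1))
scores = np.clip(np.rint(latent + rng.normal(0, 1, size=(100, 5))), 1, 10)

alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the appropriate threshold depends on the application.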

A warning, though: never send a survey to employees without a specific action plan in place for when you have analyzed the results. Whether the results are good or bad, have a game plan for how to respond to the information.

To do a thorough job of creating, deploying, and analyzing organizational surveys, you really should enlist the help of a professional with experience in this field. Any university that offers advanced degrees in psychology should be able to help you in this endeavor.

**March 22, 2001 at 5:00 am #66080 — Neil Polhemus (Participant)**

If you put the data in the form of a contingency table, there are a number of tools that could be helpful in analyzing it.
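One such tool is the chi-square test of independence. A minimal sketch, with an invented table (responses bucketed into low/mid/high by department):

```python
# Chi-square test of independence on a made-up contingency table.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: department; columns: response bucket (low 1-3, mid 4-7, high 8-10)
table = np.array([
    [20, 30, 10],   # Engineering
    [ 5, 25, 30],   # Sales
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A small p-value indicates that the response distribution differs by department, i.e., the rows and columns are not independent.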

**March 22, 2001 at 5:00 am #66085**

With regard to ANOM:

Analysis of Means (ANOM) can be used in the same situations as one-way or two-way ANOVA. It is a graphical alternative to ANOVA, developed to be more intuitively understood by engineers and other non-statisticians.

ANOM plots the mean response for each level of a single factor, or for each level combination of a pair of factors. Significance is indicated using "control limits"; the resemblance to control charts is a nice feature of ANOM.
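A rough sketch of the idea for a balanced one-way layout follows. Note this is an approximation: exact ANOM uses tabulated critical values h(alpha, k, df), and a Bonferroni-adjusted t quantile stands in for them here. The group data are simulated.

```python
# Approximate ANOM decision limits for k group means (balanced design).
# Exact ANOM uses tabulated critical values h(alpha, k, df); a
# Bonferroni-adjusted t quantile is used below as a stand-in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
k, n, alpha = 4, 25, 0.05
groups = [rng.normal(loc, 1.0, size=n) for loc in (5.0, 5.1, 4.9, 6.5)]

means = np.array([g.mean() for g in groups])
grand_mean = np.concatenate(groups).mean()
df = k * (n - 1)
pooled_sd = np.sqrt(sum(g.var(ddof=1) for g in groups) / k)

h = stats.t.ppf(1 - alpha / (2 * k), df)          # Bonferroni stand-in
half_width = h * pooled_sd * np.sqrt((k - 1) / (k * n))
lower, upper = grand_mean - half_width, grand_mean + half_width

for i, m in enumerate(means):
    flag = "OUT" if (m < lower or m > upper) else "in"
    print(f"group {i}: mean={m:.2f} [{flag}]  limits=({lower:.2f}, {upper:.2f})")
```

Plotting the group means against these limits gives the control-chart-like display described above: any mean outside the band is flagged as significantly different from the grand mean.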

**March 23, 2001 at 5:00 am #66086 — Neil Polhemus (Participant)**

In ANOM, the hypothesis being tested is different from that in ANOVA. The goal in ANOM is to determine which means are significantly different from the grand mean. This may or may not be an interesting hypothesis in your situation.

**March 23, 2001 at 5:00 am #66088**

That's right; I didn't mean to imply that ANOM is equivalent to ANOVA. Both address the null hypothesis that the mean response is the same at all levels, but they deal differently with deviations from it. ANOVA looks at the sum of squared deviations from the grand mean, and significance doesn't tell you which means differ, only that something differs. ANOM compares each mean separately to the grand mean, so it gives a significant/not-significant decision for each factor level.

The two methods won’t always agree in their conclusions in borderline cases, because they differ in how sensitive they are to different types of deviation from equality of means.
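To make the contrast concrete, here is a small simulated example (group data invented): ANOVA returns a single omnibus p-value, while the ANOM-style view examines each mean's deviation from the grand mean.

```python
# ANOVA gives one global answer; an ANOM-style view looks per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = [rng.normal(loc, 1.0, size=30) for loc in (5.0, 5.0, 6.0)]

# ANOVA: one omnibus test of "all means equal" -- no culprit named
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# ANOM-style view: each group mean against the grand mean
grand_mean = np.concatenate(groups).mean()
for i, g in enumerate(groups):
    print(f"group {i}: deviation from grand mean = {g.mean() - grand_mean:+.2f}")
```

Here ANOVA flags that the means are not all equal, and the per-group deviations point to the third group as the one pulling away from the rest.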

**March 23, 2001 at 5:00 am #66093 — Ken Myers (Participant)**

Esther,

As an aside, there is a survey development/analysis software package that looks quite promising. It's called Sphinx. Funny name, but the software is quite capable of performing a full statistical analysis of the survey data you speak of. I am considering purchasing it for group use. See the following site for information if you are interested:

http://www.lesphinx-developpement.fr/en/

Regards,

Ken

**March 23, 2001 at 5:00 am #66094 — Ken Myers (Participant)**

Neil,

This is an interesting point you've raised about ANOM vs. ANOVA. I've had many discussions with members of our Applied Statistics Group about it. What exactly is the test in ANOVA? My colleagues contend that you learn something different from an ANOVA followed by post hoc comparison of the means than from ANOM. However, my logic suggests that if one mean is not significantly different from the grand mean, and the same can be said of another mean, then you can logically conclude that the two means are not significantly different from each other. My colleagues claim there is a fallacy in my logic, but they are at a loss to elaborate. Could you shed any light on this point?

Thanks much in advance…

Regards,

Ken

**March 25, 2001 at 5:00 am #66105 — Neil Polhemus (Participant)**

As I understand it, ANOM bases its decision about rejecting the null hypothesis on a multivariate t distribution. This distribution looks at the position of the k means in a k-dimensional space surrounding the grand mean (in a manner similar to a multivariate control chart). If the point lies too far from the centroid, it indicates that at least one mean does not equal the grand mean. The decision bands show which of the means are beyond their expected distance. At least, that's my best understanding given the somewhat sketchy explanations found in most sources. If anyone has a good reference where all the details are carefully laid out, I'd appreciate knowing about it.

The problem with multiple comparisons is that although two means may individually not be far enough from the grand mean to be declared significantly different from it, they may be far enough from each other to declare statistical significance if that comparison is the only one of interest, or one of a small subset. What we do in post hoc comparisons after ANOVA is carefully control the Type I error of each comparison we plan to make, so that all pairwise comparisons (or a selected set of them) can be made without exceeding an experimentwide error rate of 5% (or some other predefined value). There is no such control in ANOM for pairwise comparisons, so if you do make them, the overall Type I error is not controlled.

As always, the question is: "Have we demonstrated a difference large enough that it could not have happened just by chance with a probability of 5% or higher?" Lack of statistical significance does not prove the null hypothesis; it just shows a lack of sufficient evidence to reject it.
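A sketch of the controlled post hoc comparison described here, using Tukey's HSD, which holds the experimentwide Type I error at alpha across all pairwise comparisons. The groups and data below are invented.

```python
# Tukey's HSD: all pairwise comparisons at a controlled familywise alpha.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
scores = np.concatenate([
    rng.normal(5.0, 1.0, size=40),   # group A
    rng.normal(5.2, 1.0, size=40),   # group B
    rng.normal(6.2, 1.0, size=40),   # group C
])
labels = np.repeat(["A", "B", "C"], 40)

result = pairwise_tukeyhsd(scores, labels, alpha=0.05)
print(result.summary())   # one reject/not-reject decision per pair
```

Unlike unadjusted pairwise t tests, the HSD intervals are widened so that the 5% error rate applies to the whole family of comparisons, which is exactly the control that pairwise use of ANOM bands lacks.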

**March 26, 2001 at 5:00 am #66119 — Fernandez (Participant)**

Thanks, Ken. I will take a look at the software.

**March 27, 2001 at 5:00 am #66124 — Ken Myers (Participant)**

Esther,

I've found that the demo download from the Sphinx site does not always work. If that happens to you, go to http://www.sage.com and locate the Sphinx software area; you should be able to get a demo download from there. If you still have problems, send an email to Susan Radmiller at [email protected]. Sue will promptly follow up with a demo that works.

Good luck,

Ken

**March 30, 2001 at 5:00 am #66154 — Fernandez (Participant)**

Thanks for the tip. I downloaded the demo with no problem. Looks pretty awesome! (I could do without the winking Sphinx, though!)


The forum ‘General’ is closed to new topics and replies.