iSixSigma

Training Evaluation Analysis


Viewing 5 posts - 1 through 5 (of 5 total)
  • Author
    Posts
  • #44571

    reg
    Participant

    I want to evaluate the various training seminars within my company.  The company has a standard rating system on a 1-to-5 scale for the following categories: Overall Course, Instructor, Materials, and Information Applicability.  Are there any techniques to analyze this data?  I want to be able to tell when to update materials vs. provide the instructor more training.

    #143079

    Hans
    Participant

    REG,
    Based on ten years of validating training-related evaluation surveys, you can probably be safe using the following rules:
    1. Mean
    above 4.2 = no problem; 3.8 – 4.2 = some individuals have issues, worth investigating; 3.2 – 3.8 = needs follow up; less than 3.2 = very low performance
    2. Standard deviation
    up to 1.0 = high agreement among different raters; greater than 1.0 = you may have different segments
    3. Comment analysis
    Review the comments; they are the most helpful hints.
    4. Correlation analysis
    If you correlate the individual scales with the overall satisfaction scale, high correlations mean that those scales contribute most to increases in overall satisfaction. I have no rules of thumb here, as the correlation depends on the variability of the data and the sample size (among other factors).
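    The rules of thumb above can be sketched in a few lines of Python. This is only an illustration of Hans's thresholds, not a tool he describes; the function names and example data are made up:

    ```python
    import statistics

    def rate_scale(scores):
        """Flag a list of 1-5 ratings using the mean/std-dev rules of thumb."""
        mean = statistics.mean(scores)
        stdev = statistics.stdev(scores)
        if mean > 4.2:
            mean_flag = "no problem"
        elif mean >= 3.8:
            mean_flag = "some individuals have issues"
        elif mean >= 3.2:
            mean_flag = "needs follow up"
        else:
            mean_flag = "very low performance"
        agreement = "high agreement" if stdev <= 1.0 else "possible segments"
        return mean_flag, agreement

    def pearson_r(x, y):
        """Correlate an individual scale with the overall satisfaction scale."""
        mx, my = statistics.mean(x), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)
    ```

    For example, `rate_scale([5, 5, 4, 4, 5, 3])` returns ("no problem", "high agreement"), since the mean is about 4.33 and the standard deviation about 0.82. As Hans notes, there is no fixed cut-off for the correlation; graph it against the other scales and use judgment.
    
    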
    I am well aware that a 5-point scale is not considered an interval scale, but the rules of thumb above have helped many organizations systematically improve their satisfaction ratings with training and ultimately ensure transfer of learning. (Even this needs qualification; most people don’t understand that the scale level is driven not by the 1 – 5 rating but by the underlying structure of the different individual scales … but that is not important here.)
    The above is based only on my experience, and I make no claim that this is the only way to utilize the survey information. It may work for you as well. Please also consider that the cut-off scores have a margin of error, so if a scale shows a performance level of 4.18 or 4.19 instead of 4.2, you will have to use your own judgment as to how to treat it. The comments will give you a guide on how to interpret the scales.
    I hope this helps.

    #143083

    reg
    Participant

    Hans,
    Great info!  Do you have any suggestions for when the data appears skewed (most responses are between 3 and 5)?  Not sure if it matters?
    REG

    #143085

    Hans
    Participant

    REG,
    Yes, absolutely graph the data. What you will see is that if you have a mean of > 4.2 and a standard deviation of less than 1, you will get a distribution with a majority of responses in the 5 category, followed by 4s and 3s. This is where you want to be.
    As your mean goes down and the standard deviation goes up, you will see a clear shift in the frequency distribution.
    Two additional points: in rare cases you will see a bi-modal distribution on a 5-point scale. That happens when you have two distinct groups of respondents, for example when you roll out a training developed for field engineers and provide it to systems engineers. In one case it was site-specific, i.e., the training was evaluated positively at one site but negatively at another. Once you see a bi-modal distribution, you will have to identify which stratification variable accounts for the phenomenon. I always use the comments as my first guide.
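    A quick way to "graph" the data and check for a bi-modal pattern is to tally a frequency table, both pooled and split by a candidate stratification variable. The sketch below is illustrative only; the site names and ratings are invented:

    ```python
    from collections import Counter

    def frequency_table(ratings):
        """Count responses per 1-5 category to eyeball the distribution shape."""
        counts = Counter(ratings)
        return [counts.get(k, 0) for k in range(1, 6)]

    # Hypothetical responses from two sites (made-up data for illustration).
    site_a = [5, 5, 4, 5, 4, 5, 3]
    site_b = [2, 1, 2, 3, 2, 2, 1]

    pooled = frequency_table(site_a + site_b)           # two peaks: bi-modal
    per_site = {site: frequency_table(r)
                for site, r in [("A", site_a), ("B", site_b)]}
    ```

    Here the pooled counts are [2, 4, 2, 2, 4], peaking at both 2 and 5, while each site on its own is single-peaked, which points to "site" as the stratification variable to investigate.
    
    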
    Also, be careful in how you treat the mid-point 3. If you have an additional N/A option and your scale at 3 says “Neither agree nor disagree” (or something to that effect), in many cases respondents who should use the N/A category will choose 3 instead. This is why I prefer a 6-point scale with no mid-point. There are psychometric reasons for using a mid-point; however, in my experience the advantages of an even scale outweigh those of an uneven scale. This is my experience! There are articles that say differently, but their research was not done in a training setting and was not validated by follow-up interviews and focus groups on reasons for response choice. Like I said, this research is unpublished, so a cautionary note is in order.
    In any case, encourage comments. While the scales will tell you “what” the problem is, the comments can give you an indication of “why”. I have always used both quantitative and qualitative data to support my analysis. As a matter of fact, when writing a report, I would start with the summary statistics (mean and standard deviation), follow with an exact transcript of the comments, and finish with a summary evaluation and follow-up action items.
    I hope this clarifies your questions. Regards!

    #143099

    reg
    Participant

    Thanks for your help!


The forum ‘General’ is closed to new topics and replies.