AHT Sigma Capability question

  • #49981

    Justin
    Participant

    I’m working on a Six Sigma Green Belt project on AHT (average handle time). The AHT target is 6 minutes. After analyzing our initial AHT data, I decided to manage the outliers (the agents with the highest AHT). The Sigma Capability actually went down after doing this, and I’m not sure why. Here’s the data:
     
    Initial AHT data:
    Mean = 8.63, StdDev = 2.32, VSF = 1.61
    USL = 6, LSL = not defined
    Sigma Level = -1.1339, Sigma Capability = 0.37
    DPM = 871,572, Yield = 13%, N = 50
     
    Managed-outliers data: I managed 22 of the 50 agents’ AHT down to 8.63 (the original mean). After doing so, you can see that the Mean, StdDev, and VSF all improved… but the Sigma Capability and Yield did not. This is where I’m stuck. Any ideas on what is causing this result? Thanks for the help. Much appreciated! :)
     
    Mean = 7.8091, StdDev = 1.4279, VSF = 1.19
    USL = 6, LSL = not defined
    Sigma Level = -1.2670, Sigma Capability = 0.23
    Cpk = -0.4223, Cp = not available
    DPM = 897,416, Yield = 11%, N = 50
     

    #171601

    Taylor
    Participant

    Justin, a quick glance shows your process is too far out of control to tell what is happening. Even running the data in a different sequence will probably yield a different result. Go back and look at your data collection, and focus on the DPM to start with.

    #171602

    Vallee
    Participant

    Justin, you do understand that you have completely changed your distribution… right? If 22 of the 50 agents are above your USL of 6, why would you convert them to the mean? Besides, when you attempt to normalize your data, you would convert outliers to the closest non-outlier value; not that I would do it with close to fifty percent of my sample. You may need to check whether you have a bi-modal distribution; other than that, it is out of control. My suggestion, respectfully, is to stop messing with the numbers and understand the process you are measuring.
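    For what it’s worth, a tiny sketch of the two operations on made-up numbers (all values hypothetical, none from Justin’s data set; the capping approach is often called winsorizing):

    ```python
    data = [5.1, 6.4, 7.2, 8.0, 8.9, 14.5]  # hypothetical AHT minutes; 14.5 is the outlier
    mean = sum(data) / len(data)             # 8.35

    # What Justin did: replace the outlier with the sample mean.
    replaced_with_mean = [mean if x > 12 else x for x in data]

    # What Vallee describes: cap the outlier at the closest non-outlier value.
    # Either way, only the numbers change; the underlying process does not.
    winsorized = [min(x, 8.9) for x in data]
    ```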

    #171603

    Mikel
    Member

    This is easy. Your target is obscene. AHT should be

    #171609

    Vallee
    Participant

    Stan, regardless of the question of the AHT target value, Justin does not understand what his numbers really mean or what influence he has had on the output. Maybe I have been out of the classroom too long, but since when do we convert the 40% of the data points that are outliers into the mean?

    #171615

    Mr IAM
    Participant

    Stan, this is funny… telling someone what an appropriate AHT target is without knowing their industry or application? Come on…. – M

    #171616

    Mr IAM
    Participant

    J – I agree with some of the previous posts here… Assuming your calculations are correct and you’re using a stats program to do the analysis, there is likely an issue with the shape of your distribution. I would check the normality and stability of the data and use a larger sample size. Is your sample size of 50 the AHT for one day for 50 agents? A week? A month? Also, are all the agents taking the same types of calls? If one group is taking service calls and another is taking sales calls, the distribution is going to be crazy… – Cheers, M

    #171648

    BC
    Participant

    Justin,
    Remember the calculation: sigma level = (USL - mean) / s.d. The sigma level is improved a bit by lowering the mean, but it is still negative because the mean > USL. Since you also lowered the s.d., you divided that negative distance by a smaller number, which makes a larger negative number, i.e., a worse sigma level.
    In layman’s terms, although you improved the mean, you kept it above the USL, and you have simultaneously made the distribution more consistently above the USL.
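    To make the arithmetic concrete, here is a minimal Python sketch. It assumes Justin’s tool computes the sigma level as Z = (USL - mean) / StdDev and the yield as the normal area below the USL; with the figures from his post, it reproduces his results to within rounding:

    ```python
    # Sigma level, yield, and DPM for Justin's two data sets (figures from the thread).
    from statistics import NormalDist

    USL = 6.0  # the 6-minute AHT target

    for label, mean, sd in [("initial", 8.63, 2.32),
                            ("managed outliers", 7.8091, 1.4279)]:
        z = (USL - mean) / sd        # negative whenever the mean sits above the USL
        good = NormalDist().cdf(z)   # fraction of calls at or below the USL (yield)
        dpm = (1 - good) * 1e6       # defects per million
        print(f"{label}: Z = {z:.4f}, yield = {good:.1%}, DPM = {dpm:,.0f}")

    # initial:          Z = -1.1336, yield ~ 12.8%, DPM ~ 871,500
    # managed outliers: Z = -1.2670, yield ~ 10.3%, DPM ~ 897,400
    ```

    Lowering the StdDev from 2.32 to 1.4279 shrinks the denominator, so Z grows in magnitude (more negative) even though the mean moved closer to the spec.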

    #171664

    MrMHead
    Participant

    From BC: “In layman’s terms, although you improved the mean, you kept it above the USL, and you have simultaneously made the distribution more consistently above the USL.”
    That’s what I was trying to think of:
    It has become more precisely wrong, with less chance of accidentally being right. (And there’s something about a blind squirrel in there too.)
    But I still struggle a bit with the concept of a negative s.d. Does that lead to an area multiplied by the sqrt of -1?

    #171665

    BC
    Participant

    MHead,
    There are no negative standard deviations in this example.  The only thing negative is the sigma level.  Sigma level is not a standard deviation; rather, it is the distance of the mean from the USL expressed in units of standard deviations.  In other words, a sigma level of -1.2 means that the mean is 1.2 times the standard deviation away from the USL, and the negative means that it’s OUTSIDE the spec.
    Just like I can express the height of a building in feet, meters, or number of Coke bottles, I can express the distance of the mean from the USL in the original unit of measure, or in number of standard deviations.  Standard deviation itself is still positive.
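    In code form, that unit conversion looks like this (a tiny sketch using the managed-outliers figures from the thread):

    ```python
    mean, sd, usl = 7.8091, 1.4279, 6.0  # managed-outliers figures from the thread

    distance_minutes = usl - mean            # -1.8091 minutes (negative: outside spec)
    distance_sigmas = distance_minutes / sd  # -1.2670: the same distance in new units

    # The standard deviation itself (1.4279) stays positive; only the distance,
    # re-expressed in units of standard deviations, is negative.
    print(f"{distance_minutes:.4f} minutes = {distance_sigmas:.4f} standard deviations")
    ```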
    BC

    #171667

    Outlier, MDSB
    Participant

    Justin,
    BC is dead on here. Until you move the mean below the USL, you will not see any significant improvement in the sigma level. The nugget of good news: if your control chart suggests that your process is now stable, you can focus on ways to move all of your operators to shorter calls rather than having to chase down and investigate all the outliers. So on the accuracy-vs.-precision scale, you have made your process more precise (but still missing the target), and now you can aim everyone at the target.
    Good luck.
     

    #171668

    Vallee
    Participant

    How is his process stable when he moved 22 of the 50 data points to the old mean? As stated before, we still don’t know whether this could be a bi-modal or some other type of pattern. He changed the numbers, not the process, and it appears he also violated the rules for handling outliers in the first place. BC was dead on with his statement about the USL and what happened to the numbers, which was also stated previously.

    #171669

    Taylor
    Participant

    Justin
    I have taken another look at this after reading some of the comments. First of all, by managing your 22 data points you have taken normal data (by assumption) and made it non-normal; this is the reason the yield calculation is diminishing. My suggestion, just for the sake of analyzing the data: run a test for normality on the original data set, then set your target as the centerline and make your upper spec limit 12 minutes. Run the normality test and determine whether you have any outliers, special causes, etc. I believe you will see the same data points flagged as special cause, if any. Basically, all I’m asking you to do here is put your data set in a form you can analyze; the current setup has more than 75% of the data points above the spec limit, which is very difficult to analyze. Also run Cp on your original data set. This will give you a hint of what you could be capable of if your process were centered.
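    For illustration, a quick sketch of that Cp hint; the 0-to-12-minute limits are Taylor’s hypothetical re-framing around the 6-minute centerline (the LSL of 0 is an assumed symmetric lower limit, not part of Justin’s actual spec):

    ```python
    sd = 2.32                    # StdDev from Justin's original data set
    usl, lsl = 12.0, 0.0         # hypothetical limits, symmetric about the 6-minute target

    cp = (usl - lsl) / (6 * sd)  # potential capability if the process were centered
    print(f"Cp = {cp:.2f}")      # ~0.86
    ```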

    #171672

    Marlon Brando
    Participant

    There is no negative SD.

    #171834

    Nia
    Participant

    I recently ran an AHT improvement project. The data was not normal, but to understand it better we broke AHT down into its component parts and ran them through control charts; this was to understand which parts of AHT were in or out of control. Targeting outliers is a quick win and won’t solve your problem. Also split the data by call type if you can, and by area; it could be one particular team, or even a site, causing the high variation you are seeing. We took this approach and successfully highlighted that ACW (after-call work) and Hold were out of control, then ran solution workshops to understand why, and stripped off 51 seconds in a blink…
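    A minimal sketch of that per-component control-charting step, using made-up ACW values in seconds (an individuals chart with limits derived from the average moving range; every number here is hypothetical):

    ```python
    acw = [42, 55, 48, 61, 39, 74, 52, 47, 58, 66]  # hypothetical per-call ACW, seconds

    center = sum(acw) / len(acw)
    moving_ranges = [abs(b - a) for a, b in zip(acw, acw[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)

    # 2.66 = 3 / d2, where d2 = 1.128 for moving ranges of size 2
    ucl = center + 2.66 * mr_bar
    lcl = center - 2.66 * mr_bar
    print(f"center = {center:.1f}s, UCL = {ucl:.1f}s, LCL = {lcl:.1f}s")
    ```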

    #171837

    pise
    Participant

    Hi Justin,
    The way you have calculated it is perfectly fine, and it is showing an improvement in your process. You need to follow the basics of statistics: for a negative sigma, you check the area under the curve for the negative values only, and at Z = -1.13 the defect is 12.97%, whereas at Z = -1.27 the defect level is 10.2%, thus showing an improvement in the process.
    So cheers for your hard work; you have reduced the defect probability by 2.7%.
    Cheers
     

    #171842

    Sridhar Sukumar
    Member

    Hi,
    N = 50 in both cases, so you have not removed 22 points in the second analysis; you have probably taken a different set of 50 data points. Please check.
     

    #171846

    Vallee
    Participant

    Please read his statement. It is the same data, the same 50 points, but he transformed it (converted almost half of the points to the old mean).

    #171847

    Vallee
    Participant

    You need to read through all of the posts and his data; then you will realize that he broke too many assumptions and parameters. You are incorrect in your statement.


The forum ‘General’ is closed to new topics and replies.