AHT Sigma Capability question
Six Sigma – iSixSigma › Forums › Old Forums › General › AHT Sigma Capability question
 This topic has 19 replies, 12 voices, and was last updated 14 years, 7 months ago by Vallee.

AuthorPosts

April 30, 2008 at 11:45 pm #49981
I’m working on a Six Sigma Green Belt project on AHT. The AHT target is 6 minutes. After analyzing our initial AHT data, I decided to manage outliers (the agents with the highest AHT). The Sigma Capability actually went down after doing this, and I’m not sure why. Here’s the data:
Initial AHT data:
Mean = 8.63, StdDev = 2.32, VSF = 1.61
USL = 6, LSL not defined
Sigma Level = 1.1339, Sigma Capability = .37
DPM = 871,572, Yield = 13%, N = 50
Managed-outliers data: I managed 22 of the 50 agents’ AHT down to 8.63 (the original mean). After doing so, you can see that the Mean, StdDev, and VSF all improved… but the Sigma Capability and yield did not. This is where I’m stuck. Any ideas on what is causing this result? Thanks for the help. Much appreciated! :)
Mean = 7.8091, StdDev = 1.4279, VSF = 1.19
USL = 6, LSL not defined
Sigma Level = 1.2670, Sigma Capability = .23
Cpk = .4223, Cp not available
DPM = 897,416, Yield = 11%, N = 50
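As a side note, the posted summary figures can be reproduced from just the mean, standard deviation, and USL, assuming a normal distribution and a one-sided (USL-only) spec. This is an illustrative sketch, not the tool used above; the function names are made up:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def one_sided_stats(mean, sd, usl):
    """Sigma level, yield, DPM, and Cpk for a USL-only spec, assuming normality."""
    z = (usl - mean) / sd           # negative whenever the mean sits above the USL
    yld = phi(z)                    # fraction of calls at or below the USL
    dpm = (1.0 - yld) * 1_000_000   # defects per million
    cpk = z / 3.0                   # with no LSL, Cpk reduces to Z/3
    return z, yld, dpm, cpk

for label, mean, sd in [("initial", 8.63, 2.32), ("managed", 7.8091, 1.4279)]:
    z, yld, dpm, cpk = one_sided_stats(mean, sd, usl=6.0)
    print(f"{label}: Z = {z:+.4f}, yield = {yld:.1%}, DPM = {dpm:,.0f}, Cpk = {cpk:+.4f}")
```

Run on both data sets, this reproduces the reported sigma-level magnitudes (about 1.13 and 1.27), the DPM figures, and the Cpk of −.4223: the “improved” data really does show more defects per million.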
May 1, 2008 at 12:38 am #171601
Taylor
Justin, a quick glance shows your process is too far out of control to tell what is happening. Even running the data in a different sequence will probably yield a different result. Go back and look at your data collection, and focus on the DPM to start with.
May 1, 2008 at 12:59 am #171602
Vallee
Justin, you do understand that you have completely changed your distribution… right? If 22 of the 50 agents are above your USL of 6, why would you convert them to the mean? Besides, when you attempt to normalize your data, you would convert outliers to the closest non-outlier position, and I would not do even that with close to fifty percent of my sample. You may also need to check whether you have a bimodal distribution; other than that, it is out of control. My suggestion, respectfully, is to stop messing with the numbers and understand the process you are measuring.
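One quick way to check the bimodal suggestion, before reaching for a stats package, is to bucket the raw AHT values and eyeball the counts. A pure-Python sketch with made-up data (the real 50 observations are not in the thread, so substitute them in):

```python
from collections import Counter

# Hypothetical AHT values in minutes -- NOT data from this thread.
aht = [5.9, 6.2, 6.4, 6.1, 5.8, 6.3, 6.0, 6.2,    # one cluster near 6
       11.0, 11.4, 10.8, 11.2, 11.5, 10.9, 11.1]  # a second cluster near 11

def histogram(values, bin_width=1.0):
    """Count observations per bin; two separated peaks hint at a bimodal mix."""
    bins = Counter(int(v // bin_width) for v in values)
    return {b * bin_width: bins[b] for b in sorted(bins)}

for edge, count in histogram(aht).items():
    print(f"{edge:5.1f}-{edge + 1:4.1f} | {'#' * count}")
```

Two distinct peaks with an empty gap between them would suggest two populations (e.g., two call types or teams) rather than one out-of-control process.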
May 1, 2008 at 2:48 am #171603
This is easy. Your target is obscene. AHT should be
May 1, 2008 at 6:59 am #171609
Vallee
Stan, regardless of the question of the AHT target value, Justin does not understand what his numbers really mean or how his changes influence the output. Maybe I have been out of the classroom too long, but since when do we convert 40% of the data, as outliers, into the mean?
May 1, 2008 at 1:02 pm #171615
Stan, this is funny… Telling someone what an appropriate AHT target is without knowing their industry or application? Come on… - M
May 1, 2008 at 1:08 pm #171616
J - I agree with some of the previous posts here. Assuming your calculations are correct and you’re using a stats program to do the analysis, there is likely an issue with the shape of your distribution. I would check the normality and stability of the data and use a larger sample size. Is your sample size of 50 the AHT for 50 agents over one day? A week? A month? Also, are all the agents taking the same types of calls? If one group is taking service calls and another is taking sales calls, the distribution is going to be crazy… - Cheers, M
May 1, 2008 at 5:42 pm #171648
Justin,
Remember the calculation: (USL - mean)/s.d. The sigma level is improved a bit by lowering the mean, but it’s still negative because mean > USL. Since you also lowered the s.d., you’ve made a larger negative number, i.e., a worse sigma level.
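That arithmetic is easy to confirm; a two-line sketch using only the summary numbers from the original post:

```python
def sigma_level(mean, sd, usl):
    # (USL - mean) / s.d.; negative whenever the mean sits above the USL
    return (usl - mean) / sd

before = sigma_level(8.63, 2.32, usl=6.0)      # about -1.13
after = sigma_level(7.8091, 1.4279, usl=6.0)   # about -1.27
print(before, after)  # the "improved" process is further outside the spec
```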
In layman’s terms, although you improved the mean, you kept it above the USL, and you have simultaneously made the distribution more consistently above the USL.

May 1, 2008 at 7:12 pm #171664
MrMHead
From BC: “In layman’s terms, although you improved the mean, you kept it above the USL, and you have simultaneously made the distribution more consistently above the USL.”
That’s what I was trying to think of!
It has become more precisely wrong - less chance of accidentally being right. (And there’s something about a blind squirrel in there too.)
But I still struggle a bit with the concept of a negative s.d. Does that lead to an area multiplied by the sqrt of -1?

May 1, 2008 at 7:29 pm #171665
MHead,
There are no negative standard deviations in this example. The only thing negative is the sigma level. Sigma level is not a standard deviation; rather, it is the distance of the mean from the USL expressed in units of standard deviations. In other words, a sigma level of -1.2 means that the mean is 1.2 standard deviations away from the USL, and the negative sign means that it’s OUTSIDE the spec.
Just like I can express the height of a building in feet, meters, or number of Coke bottles, I can express the distance of the mean from the USL in the original unit of measure, or in number of standard deviations. Standard deviation itself is still positive.
BC

May 1, 2008 at 8:46 pm #171667
Outlier, MDSB
Justin,
BC is dead on here. Until you move the Mean below the USL, you will not see any significant improvement in sigma level. The nugget of good news is that if your control chart suggests that your process is now stable, you can focus on ways to move all of your operators to shorter calls rather than having to chase down and investigate all the outliers. So on the Accuracy vs. Precision scale, you have made your process more precise (but still missing the target) so now you can aim everyone at the target.
Good luck.
May 1, 2008 at 8:56 pm #171668
Vallee
How is his process stable when he moved 22 of the 50 data points to the old mean? As stated before, we still don’t know whether this could be a bimodal or some other type of pattern. He changed the numbers, not the process, and it appears he also violated the rules for handling outliers in the first place. BC was dead on with his statement about the USL and what happened to the numbers, which was also stated previously.
May 1, 2008 at 10:00 pm #171669
Taylor
Justin,
I have taken another look at this after reading some of the comments. First of all, by managing your 22 data points you have taken normal data (an assumption) and made it non-normal; this is the reason the yield calculation is diminishing. My suggestion, just for the sake of analyzing the data: run a test for normality on the original data set, then set your target as the centerline and make your upper spec limit 12 minutes. Run the normality test again and determine whether you have any outliers, special causes, etc. I believe you will see the same data points flagged as special cause, if any. Basically, all I’m asking you to do here is put your data set in a form you can analyze; the current setup obviously has more than 75% of the data points above the spec limit, which is very difficult to analyze. Also run Cp on your original data set. This will give you a hint as to what you could be capable of if your process were centered.

May 1, 2008 at 11:34 pm #171672
Marlon Brando
There is no negative SD.
May 8, 2008 at 8:53 am #171834
I recently ran an AHT improvement project. The data was not normal, but to understand it better we broke AHT down into its component parts and ran them through control charts to see which parts of AHT were in or out of control. Targeting outliers is a quick win and won’t solve your problem. Also split it by call type if you can, and by area; it could be one particular team, or even a site, causing the high variation you are seeing. We took this approach, successfully highlighted that ACW and Hold were out of control, ran solution workshops to understand why, and stripped off 51 seconds in a blink…
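The component-level control-charting approach described above is easy to sketch. Below is a minimal individuals-chart (I-MR) limit calculation on hypothetical hold-time data (not from this thread); the 2.66 multiplier is the standard constant for moving ranges of two consecutive points:

```python
# Individuals (I-MR) chart limits for one AHT component, e.g. hold time.
# Hypothetical per-call hold times in seconds -- NOT data from this thread.
hold = [42, 55, 38, 61, 47, 52, 44, 58, 49, 53, 46, 60]

mean = sum(hold) / len(hold)
moving_ranges = [abs(b - a) for a, b in zip(hold, hold[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = mean + 2.66 * mr_bar   # upper control limit
lcl = mean - 2.66 * mr_bar   # lower control limit (floor at 0 for durations)
print(f"mean = {mean:.1f}s, UCL = {ucl:.1f}s, LCL = {max(lcl, 0):.1f}s")

out_of_control = [x for x in hold if not (lcl <= x <= ucl)]
print("points beyond limits:", out_of_control)
```

Running this per component (talk, hold, ACW) shows which piece of AHT is actually unstable, which is exactly how the ACW and Hold problems above were isolated.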
May 8, 2008 at 9:56 am #171837
Hi Justin,
The way you have calculated is perfectly fine and is showing an improvement in your process. Following the basics of statistics, for a negative sigma you check the area under the curve for negative values only: at Z = 1.13 your defect level is 12.97%, whereas at Z = 1.27 it is 10.2%, showing improvement in the process.
So cheers for your hard work; you have reduced the defect probability by 2.7%.
Cheers
May 8, 2008 at 3:14 pm #171842
Sridhar Sukumar
Hi,
N = 50 in both cases, so you have not removed 22 points in the second analysis; you have probably taken a different set of 50 data points. Please check.
May 8, 2008 at 9:30 pm #171846
Vallee
Please read his statement. This is the same data, the same 50 points, but he transformed it (converted almost half the points to the old mean).
May 8, 2008 at 9:33 pm #171847
Vallee
You need to read through all the posts and his data; then you will realize that he broke too many assumptions and parameters. You are incorrect in your statement.
The forum ‘General’ is closed to new topics and replies.