Exponential effort…
- This topic has 4 replies, 4 voices, and was last updated 13 years, 5 months ago by The Badger.
December 10, 2008 at 12:05 pm #51487
The Badger
Hi all,
My process sigma improved from 2.5 to 2.7 this year following a rigorous series of incremental Green Belt improvement projects, yay!
Does this mean that next year, to improve my process sigma from 2.7 to 2.9 I will have to double my efforts, and then double my efforts again the following year to achieve 3.1 sigma and so on…?
The Badger
December 10, 2008 at 2:59 pm #178526
Improvement projects are only as effective at raising the sigma level as their input and scope allow. There is no fixed relationship between sigma level and annual improvement; it's simply that the better you get, the tougher it is to keep improving.
But it is the challenge that gets the juices going, and it seems like you have started well. Keep doing what you are doing, and perhaps elevate to BB projects instead of just GB projects.
December 12, 2008 at 5:05 am #178634
Jonathon Andell
I'm afraid there are some assumptions in your reasoning.
First of all, we have to know whether you are dealing with continuous or discrete data, because of the alleged 1.5-sigma "shift." In most but not all cases, your defect rate would have fallen by 44% if you use continuous data, and by 27% if you use discrete. (Don't get me started on the whole sigma shift baloney…)
For the sake of simplicity, let’s assume that you use continuous data. To step from 2.7 to 2.9 sigma you will have to reduce defects by 46%; to reach 3.1 sigma another 48%. It’s not linear, because of the math associated with the area under the normal curve.
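Those percentages fall straight out of the normal tail area. Here is a quick sketch in Python (standard library only; the `dpmo` function name and structure are mine, not from the thread) showing the arithmetic for the continuous, no-shift case:

```python
import math

def dpmo(sigma, shift=0.0):
    """Defects per million opportunities at a given sigma level.

    The upper-tail probability of the standard normal at z = sigma - shift,
    scaled to per-million. With shift=0 this is the short-term (continuous)
    figure discussed above.
    """
    z = sigma - shift
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1_000_000

# Defect rates at each sigma level, no shift applied:
d25, d27, d29, d31 = (dpmo(s) for s in (2.5, 2.7, 2.9, 3.1))

print(round(1 - d27 / d25, 2))  # 2.5 -> 2.7: ~44% reduction
print(round(1 - d29 / d27, 2))  # 2.7 -> 2.9: ~46% reduction
print(round(1 - d31 / d29, 2))  # 2.9 -> 3.1: ~48% reduction
```

Each 0.2-sigma step demands a slightly bigger percentage cut in defects than the last, which is exactly the nonlinearity being described.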
Also, you assume that effort alone will reduce your defect rate, when in fact the process and its parameters largely determine how much improvement is feasible.
I hope this rant didn't confuse you.
December 12, 2008 at 11:14 am #178641
Badger,
From 160,000 DPMO to 115,000 DPMO? Sounds like you have reduced defects to a moderate extent. (If I used the handy dandy Sigma calculator correctly).
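Those DPMO figures can be reproduced with the conventional 1.5-sigma shift applied. A minimal sketch (the function name is mine; this mirrors what the usual online sigma calculators do):

```python
import math

def dpmo_long_term(sigma, shift=1.5):
    """Long-term DPMO: normal upper-tail area at z = sigma - shift,
    per million opportunities, using the conventional 1.5-sigma shift."""
    return 0.5 * math.erfc((sigma - shift) / math.sqrt(2)) * 1_000_000

print(round(dpmo_long_term(2.5)))  # ~158,655 DPMO at 2.5 sigma
print(round(dpmo_long_term(2.7)))  # ~115,070 DPMO at 2.7 sigma
```

So "160,000 to 115,000" is in line with the shifted calculation.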
Do your defect Paretos follow the 80/20 rule? Usually there is low-hanging fruit at the beginning, and the smaller incremental improvements come only well into your continuous improvement efforts.
You could be suffering from low-opportunity-count-itis. A deadly Six Sigma disease :-)
HACL
December 12, 2008 at 12:12 pm #178645
The Badger
Thanks for your reply. I agree with and understand all of your points.
Quick wins came and went, showing big step changes (the process was pretty grim before that!), and the past 12 months have been about systematic GB and BB activity yielding the 2.5 to 2.7 result.
I'm reasonably comfortable with the influence that the normal curve has on the % improvement each new project will make, given the data type (continuous in this case).
I guess what I'm trying to get my head around is that the process is targeted to achieve 3.0 sigma by the end of 2009, and my process owner is asking me whether the current approach to improvements will get us there. I'm trying to help him understand that we need to do more than what we did before (over a similar period), and in fact the improvement will be less for the same effort.
The forum ‘General’ is closed to new topics and replies.