iSixSigma

GP

Forum Replies Created

Viewing 3 posts - 1 through 3 (of 3 total)
  • #120235

    GP
    Participant

    Hi Karem,
Since you have now completed your project and only need to arrive at the costing, it should not be very difficult. I agree with the others that you should have taken initial data as a baseline reference to understand where you made a difference, but at this point there is no use pondering over the past. My suggestion is as follows.
Please find out from your finance department the cost of producing the steam before you started the project. That will be a definite value, because the requirement is for a certain volume of steam at a certain pressure, as needed by all the dependent processes.
Now that you have plugged all the leaks in the system, go back and check what it costs to produce the same steam.
Then calculate the amount of money you spent on the various fixing activities; this might also include the cost of Six Sigma training and the rest of the implementation work.
The savings will then be: savings = old cost of production - new cost of production - cost of implementation.
This value is the money you have saved the company.
Use the standard calculator for defects vs. total population, making the simple assumption that previously you had 1125 leakages and now you have whatever remains of those 1125. The before and after figures above are enough for the calculation.
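A minimal sketch of the savings arithmetic above, using hypothetical cost figures (the real values must come from your finance department):

```python
# Hypothetical figures for illustration only -- substitute the real costs
# from your finance department.
old_cost = 500_000            # annual cost of producing the required steam, before the project
new_cost = 420_000            # cost of producing the same steam after fixing the leaks
implementation_cost = 30_000  # training, repairs and other fixing activities

# savings = old cost of production - new cost of production - cost of implementation
savings = old_cost - new_cost - implementation_cost
print(savings)  # → 50000
```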
    Please let me know if you need more info or help on this.
    -GP

    #56656

    GP
    Participant

    Hi There,
I am also from the tech-support contact-center industry and have come across the same problem with FCR in the past. What matters here is the FCR time frame. As we all know, it is not possible to resolve maybe 50% of the 20% of faults that are hardware-related in one call or within a two-day window. So we changed the time frame to 7 calendar days and got more critical information on where the opportunity really was. We had been running at approximately 90% FCR, but when we changed the time frame we dropped to 50%. We then did some RCA on the FCR failures and improved the number with various actions and tools, without any additional investment.
After that, when we measured the 2-day FCR again just to see where we stood compared to the previous process, we realised we were at 95%, with FCR giving an R^2 value of 0.95 against CSAT. So maybe you could work along the same lines to get a better hold on your process.
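To illustrate the FCR-to-CSAT relationship mentioned above, here is a sketch of how an R^2 value between the two metrics can be computed. The weekly figures below are hypothetical, invented for the example, not the numbers from our process:

```python
# Hypothetical weekly FCR and CSAT figures, for illustration only.
fcr  = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95]   # first-call-resolution rate
csat = [0.62, 0.68, 0.75, 0.81, 0.90, 0.93]   # customer-satisfaction score

# Pearson correlation r, then R^2 = r**2 (for simple linear regression
# of one variable on the other, R^2 equals the squared correlation).
n = len(fcr)
mean_x = sum(fcr) / n
mean_y = sum(csat) / n
cov   = sum((x - mean_x) * (y - mean_y) for x, y in zip(fcr, csat))
var_x = sum((x - mean_x) ** 2 for x in fcr)
var_y = sum((y - mean_y) ** 2 for y in csat)
r = cov / (var_x * var_y) ** 0.5
r_squared = r ** 2
print(round(r_squared, 3))
```

With made-up data this close to a straight line, R^2 comes out very high; the point is only the mechanics of the calculation.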
Your client may be satisfied with 85% at the 2-day level, but are your customers really satisfied?

    #92724

    GP
    Participant

    While measuring any process for failures or defects, it is essential to have clarity on what exactly we are going to measure and improve.
Before going into the technicals, let us get the ground rules clear:
1] Always compare apples to apples. If you want to express the results with the number of documents as the denominator, the numerator cannot be the number of errors in the sample; it has to be the number of documents with errors. This brings us to the concepts of “defects” and “defectives”, discussed below.
2] The metrics used for measurement have to be consistent over time, across geographies, etc., so that everyone talks the same language at all times.
    Now, let me request you to think about your process in terms of “defects” and “defectives”.
When you say that you have 30 errors, please be sure that you are referring to the total number of mistakes that occurred in keying in those 300 docs. If so, we are talking about 30 “defects” in a sample of 300 docs. Going by the defects logic, what you say is perfectly all right; you probably just need to translate it into a metric with more intuitive appeal, i.e. DPMO, or you can use an attribute sigma calculator. Simple calculations with your figures give the following results:
DPMO: 5263; sigma level: 4.06
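For reference, the arithmetic behind those figures can be sketched as follows. The post does not state the number of opportunities per document; a DPMO of 5263 from 30 defects in 300 docs implies roughly 19 opportunities (data fields) per document, so that count is a back-derived assumption here:

```python
from statistics import NormalDist

defects = 30
units = 300
opportunities_per_unit = 19  # assumed -- back-derived from the stated DPMO of 5263

# DPMO = defects per million opportunities
dpmo = defects / (units * opportunities_per_unit) * 1_000_000

# Long-term sigma level, using the conventional 1.5-sigma shift
sigma = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(round(dpmo), round(sigma, 2))  # → 5263 4.06
```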
    It will be pretty easy to also convert your historical data to the above metrics to arrive at a baseline.
In order to gauge the impact of your improvement actions, you can perform the same calculations afterwards and compare. Note that lower is better for DPMO, while higher is better for the sigma level. This was about defects.
On the other hand, “defectives” means the number of documents that were defective. Therefore, you need to find out in how many documents these 30 errors were found. Once you have that figure, you can express it as a % of the total sample size to get a metric that makes some sense, though not as much as DPMO and the sigma level.
The choice depends on what is more important to your customers! Using both of the above concepts in tandem can also lead to good insight into the process behaviour. For example, if a particular data element goes into error more often than the others, maybe the data-entry operators need more training on that element, or there may be a system bug that causes repeated errors for the same field. You can also construct a matrix in which you plot data element vs. operator to see what is really happening. This kind of insight may not come if you go only the “defectives” way!
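The data-element-vs-operator matrix described above can be sketched with a simple tally. The operator and field names below are hypothetical, chosen only to show the mechanics:

```python
from collections import Counter

# Hypothetical error log: one (operator, data field) pair per keying mistake.
errors = [
    ("op_A", "invoice_date"), ("op_A", "invoice_date"),
    ("op_B", "amount"),       ("op_C", "invoice_date"),
    ("op_B", "invoice_date"), ("op_A", "amount"),
]

# Tally errors per (operator, field) cell of the matrix.
matrix = Counter(errors)
for (operator, field), count in sorted(matrix.items()):
    print(f"{operator:6s} {field:14s} {count}")

# Column totals reveal which data element errs most often overall.
field_totals = Counter(field for _, field in errors)
print(field_totals.most_common(1))  # → [('invoice_date', 4)]
```

If one field dominates the totals across all operators, suspect a system bug or an unclear data element; if one operator dominates a row, suspect a training need.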
    Hope this helps!
    GP.
