Six Sigma – iSixSigma Forums Old Forums General opportunity

Viewing 18 posts - 1 through 18 (of 18 total)
  • #30584

    sudhir manglick
    Member

    Can anyone suggest how to define the number of opportunities for this case?
    I have a process where I have to monitor the time required to complete it.
    I have defined it as follows: if the process is completed in 0–2 days, that equals 0 defects; if it completes within 3–4 days, that equals 1 defect; if it takes more than 4 days, that equals 3 defects.
    Now, should I consider the opportunity count here to be 1 or 3 (the maximum number of defects that can occur)?
     

    #79776

    Mike Carnell
    Participant

    Sudhir,
    The criteria you described would be applied at the end of the process, as a customer would see it. When you take the customer's perspective, the opportunity count is almost always 1. You either deliver it perfectly or you don't.
    Good luck.

    #79777

    Mike Carnell
    Participant

    I hedged my bet with the term “almost always” because I really don’t like doing the follow-up posts about some hypothetical situation that gets conjured up and is so far down the Pareto that nobody should care.
    I have never seen the customer experience legitimately go beyond one chance to do it right the first time but that doesn’t mean it doesn’t exist.
    Good luck.

    #79782

    Ganapathi
    Participant

    Dear Sudhir,
    Assume the opportunity count is 1. The defect count, per your definition, can be 0, 1, or 3, so DPMO works out to 0, 1 million, or 3 million.
    Assume the opportunity count is 3. Then DPMO works out to 0, 0.33 million, or 1 million.
    As you can see, these DPMO numbers are not really helpful: they are discrete, with wide gaps between them. For meaningful DPMO numbers, D should be far smaller than O. DPMO will then be in the tens or thousands (as used in the sigma calculation), which is easy to monitor.
    I trust this helps you.
    Ganapathi
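    Ganapathi's arithmetic can be sketched in a few lines (the `dpmo` helper and the example figures are illustrative, not from the thread):

    ```python
    def dpmo(defects, units, opportunities_per_unit):
        """Defects per million opportunities."""
        return defects / (units * opportunities_per_unit) * 1_000_000

    # Worst case under Sudhir's scheme with opportunity = 1: 3 "defects" on 1 unit
    print(dpmo(3, 1, 1))   # 3,000,000 -- more defects than opportunities
    # The same unit scored against 3 opportunities with 1 defect
    print(dpmo(1, 1, 3))
    ```

    With only one unit in the denominator, the result jumps between a handful of values, which is exactly the gap problem described above.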

    #79941

    Ramesh Sachdeva
    Participant

    Sudhir, I suggest that both the opportunity and defect counts be taken as 1. From the customer's point of view, any delay is a defect.

    #79942

    DANG Dinh Cung
    Participant

    Dear Mr. Ramesh,
    A client only knows whether you are on time or not. I suggest the following procedure:
    1. If you are on time, score 1; if you are not on time, score 0 (meaning you failed to be on time).
    2. Count the number of failures during a unit of time (a day, or a week).
    3. Use a p-type control chart.
    Regards,
    DANG Dinh Cung,[email protected]
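    The p-chart step of the procedure above can be sketched as follows (a minimal sketch; the `p_chart_limits` helper and the daily volume are illustrative assumptions):

    ```python
    import math

    def p_chart_limits(p_bar, n):
        """3-sigma control limits for a p-chart (fraction late per period),
        assuming a binomial model with n items checked per period."""
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        lcl = max(0.0, p_bar - 3 * sigma)
        ucl = min(1.0, p_bar + 3 * sigma)
        return lcl, ucl

    # Example: 50 orders per day, historically 10% late
    lcl, ucl = p_chart_limits(0.10, 50)
    print(round(lcl, 3), round(ucl, 3))
    ```

    A day whose late fraction falls outside these limits signals a special cause rather than ordinary variation.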
     
     

    #79944

    Carr
    Participant

    It seems to me you are trying to convert continuous data to discrete data through counting. Isn’t it better to capture the time, set 0 as the LSL, and set 2 as the USL? Anything outside those boundaries is a defect. Continuous data will generally give you more information about your process than discrete data. With all the times you collect, you can run a Process Report to get a sigma value if you have Minitab.
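    Counting defects against the 0 and 2 day limits, as suggested above, might look like this (the completion times are made-up illustration data):

    ```python
    # Hypothetical completion times in days (not from the thread)
    times = [0.5, 1.2, 1.8, 2.5, 3.1, 1.0, 4.2, 1.6]

    LSL, USL = 0.0, 2.0   # spec limits suggested in the post
    defects = [t for t in times if not (LSL <= t <= USL)]

    print(len(defects), "of", len(times), "outside spec")
    defect_rate = len(defects) / len(times)
    print(f"defect rate: {defect_rate:.1%}")
    ```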

    #79951

    Tim
    Member

    Why are you using the DPMO method to calculate sigma?  Why not just use time (hours or minutes) as your continuous variable and calculate sigma using continuous methods?  You would need to determine your process entitlement (i.e., how much time should the process take?) and then use that to calculate sigma with continuous methods.
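    Under a normality assumption, the continuous-method calculation described above can be sketched like this (the mean and standard deviation shown are purely illustrative):

    ```python
    from statistics import NormalDist

    def sigma_level(mean, stdev, usl):
        """Short-term sigma estimate from continuous data: the Z of the
        upper spec limit, assuming completion times are roughly normal."""
        z = (usl - mean) / stdev
        p_defect = 1 - NormalDist().cdf(z)   # P(time > USL)
        return z, p_defect

    # Illustrative figures: mean 1.5 days, st. dev. 0.5 days, USL 2 days
    z, p = sigma_level(mean=1.5, stdev=0.5, usl=2.0)
    print(z, round(p * 1_000_000))   # Z = 1.0, about 158,655 DPMO
    ```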

    #79954

    James M. Hollingsworth, MHA
    Participant

    I believe the simplest and most correct way to count this is:
    0 to 2 days equals ZERO defects.
    Greater than 2 days equals ONE defect, period.
     
    From the customer’s perspective, anything processed greater than two days is a defect, whether it is three days or thirty. Defects weighted based on age distort the Sigma equation.

    #79955

    john beaudoin
    Participant

    The data you have is perfect for a histogram.  I suggest you make one with the number of days to complete the process on the x-axis and the count of occurrences on the y-axis.  With a statistical package, you should also be able to calculate the mean and standard deviation of days to complete, and run a Cpk analysis with a 2-day upper spec limit.  (Note: for this process, do not use a lower spec limit, as it will distort your results, assuming 0 days is good.)  The Cpk output will tell you the defects per million opportunities, assuming your process is close to a normal distribution.  If not, you can take random samples and look at the sample means, which will force a normal distribution.
    If you don’t have a software package, you can calculate a Z value for the table in a statistics book: Z = (2 days − mean) / standard deviation.  Looking up this Z value will tell you the probability that your process yields 2 days or less, and thus the probability of a defect, which you can convert to defects per million.  The same approach gives the probability that the time falls between 2 and 3 days, etc.: calculate the Z value and probability for less than 3 days and subtract the probability found for 2 days.  This data would mean more to you than rating a defect worse the longer it runs.
    E-mail me if you don’t understand or lack the software to do such an analysis.  Excel has Z-value, mean, and population standard deviation formulas, which you can use to get these values.
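    The Z-value procedure above can be sketched in Python rather than Excel (the mean and standard deviation are illustrative figures, not from the thread):

    ```python
    from statistics import NormalDist

    # Illustrative figures, not from the thread
    mean, sd = 2.5, 1.0
    nd = NormalDist(mean, sd)

    p_within_2 = nd.cdf(2)             # P(days <= 2)
    p_2_to_3 = nd.cdf(3) - nd.cdf(2)   # P(2 < days <= 3), as the post describes
    print(round(p_within_2, 3), round(p_2_to_3, 3))
    print("DPMO above 2 days:", round((1 - p_within_2) * 1_000_000))
    ```

    Subtracting adjacent cumulative probabilities gives the chance of landing in each day band, which is the probability-by-interval view the post recommends over weighted defect counts.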

    #79958

    JAD
    Participant

    Continuous Data Approach:
    Another approach is to measure the actual pain of the process by first refining the measurement system to measure in hours or minutes.  The current measurement system, which measures in days, does not pass discrimination rules.  A standard way of estimating the required resolution is the factor-of-10 rule: (USL − LSL) / 10 = minimum increment size.  If you collected data within 8-hour shifts, USL − LSL = 16 hours, assuming that LSL = 0.  Therefore, your data collection system needs to be updated to collect data in 1.6-hour increments, or to the nearest hour (rounding down).
    Just as a reference, the number one carrier for on-time performance is FedEx.  How do they do it?  They measure to the minute.  In addition, if the overall goal is to improve performance by eliminating late items, you will need to measure not only how late items are but also why they are late.
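    The factor-of-10 rule reduces to one line of arithmetic; a minimal sketch using the 8-hour-shift example above (the helper name is illustrative):

    ```python
    def min_increment(usl, lsl):
        """Factor-of-10 discrimination rule from the post: the measurement
        increment should be at most (USL - LSL) / 10."""
        return (usl - lsl) / 10

    # The post's example: an 8-hour-shift window gives USL - LSL = 16 hours
    print(min_increment(16, 0))   # 1.6 hours
    ```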
    Data Analysis
    Your data will not be normal; if it is, I would question it.  It will probably be Weibull-shaped.  The number of special causes (really late events) will determine whether you can classify the data as Weibull.  I would suggest you first understand your special causes and eliminate them (by working on them first) prior to working on the main process (however, do not remove this data from the PC calculation).  Those events are the most dissatisfying and are costing you the most money.  Then I would work from right to left, eliminating processes and causes until you are able to get the process under control.
    As far as calculating process capability (PC), I would try a Box-Cox transformation (Minitab) on the data and spec limits to get an estimate.  The transformation tries to make the data look normal so that the standard (“normal-based”) PC macros can be used.  If you have a large number of special causes (outliers), your data will not transform to a normal distribution.  If that occurs, I would suggest simply counting the number of data points within and outside the spec limits to estimate PC.
    After doing all of this, you will probably understand the process pretty well.  I would not spend any more time or effort on calculating PC.  I would spend the time on understanding the causes of times above the USL, starting from the latest and working to the earliest.
    I hope this helps.
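    The Box-Cox step can be sketched with SciPy instead of Minitab (an assumption; the generated Weibull-like data is purely illustrative):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Skewed, strictly positive "cycle time" data (illustrative)
    times = rng.weibull(1.5, size=500) * 2 + 0.01

    # boxcox finds the lambda that makes the data look most normal;
    # the same lambda must also be applied to the spec limits
    transformed, lam = stats.boxcox(times)
    print("lambda:", round(lam, 2))

    stat, pval = stats.shapiro(transformed)
    print("Shapiro-Wilk p-value after transform:", round(pval, 3))
    ```

    As the post notes, heavy outliers can defeat the transformation, in which case simply counting points outside the limits is the fallback.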

    #79959

    RT
    Member

    Be careful using Weibull for the capability analysis: it assumes an increasing or decreasing defect rate based on time or cycles.

    #79963

    john beaudoin
    Participant

    The factor-of-10 rule does not apply here.  A lot of people make the mistake of assuming that, since the process cannot possibly take less than 0 days, there is some artificial lower spec limit of zero.  This is absolutely not true.  There is no lower spec limit, because the probability calculation is based on a normal distribution curve and you need to capture all of the area under the curve to the left of your upper spec limit.  If he plots this data, the curve will encompass negative values; to capture that area you can’t use zero, or you will lop off a portion of the curve, which will damage your results.  Keep in mind that some of his data may be in the 7-day range, and so forth, so the standard deviation may be more than a day, while the customer only cares about 2 days (for example, the product may need to ship by a certain time each day, and it only matters that it makes the shipment time; whether it was done 5 hours early or not, it still won’t ship until the carrier makes their pickup).
    Note that the Weibull-shaped data is irrelevant, as you can use random sample averages to get a normal distribution (assuming this operation is performed several times a day, not once a day).
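    The random-sample-averages point is the central limit theorem at work; a minimal sketch with made-up skewed data:

    ```python
    import random
    import statistics

    random.seed(1)
    # Skewed raw "days to complete" data (illustrative, roughly exponential)
    raw = [random.expovariate(1 / 2.0) for _ in range(5000)]

    # Means of random samples of 30: by the central limit theorem these
    # are far closer to normal than the raw skewed data
    sample_means = [statistics.mean(random.sample(raw, 30)) for _ in range(200)]

    print(round(statistics.mean(raw), 2), round(statistics.mean(sample_means), 2))
    ```

    The sample means center on the same average as the raw data but follow an approximately normal distribution, which is what makes the normal-based capability math usable.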

    #79979

    Mike Carnell
    Participant

    John,
    Normally I really like the answers you put together, but I have to disagree with this one. Just because we can calculate something doesn’t mean we should. You get one opportunity to deliver something on time, from the customer’s perspective. If we go through all the other analysis, what additional information (not data) do we have?

    #79983

    Mikel
    Member

    Wow John, answer the guy’s question, not something else.
    The opportunity count is 1.

    #80003

    john beaudoin
    Participant

    Mike, you are right that you either hit the 2 days or you don’t.  In the transactional world, though, it is helpful to know not only that you fail to hit 2 days x% of the time in a given sample, but also the confidence interval of the sample at 95–98% and the probability that you will continue to perform at 2 days or less; you can also calculate the probability that you will be at 3 days or less, 4 days or less, etc.  Sometimes this information is useful when making a commitment to a customer, as we have found that variation reduction is less upsetting to a customer than telling them you will meet 2 days and then missing it.  (This is obviously short term; once we reduce variation, we want to shift the mean through process improvements.)
    Good/bad by itself doesn’t tell you how much you missed the target by.  The original question asked about weighting to penalize items more as they miss the target by a greater amount.  I was simply suggesting that probabilities might be a better approach to give management an idea of the failure rates at different levels.

    #80004

    Mike Carnell
    Participant

    John,
    If I were looking at this from my side as a supplier – I agree I would use more than a go/no go type metric. That is a different question.
    The question in the post was how many opportunities. It is still one. You get one chance to satisfy the customer. It isn’t just delivery either. It is one opportunity to deliver on time, package correctly, send the correct paperwork, right product, product works, etc.
    My answer to the original question about hitting the 0-2 days is: one opportunity.

    #80005

    john beaudoin
    Participant

    I never disputed that.


The forum ‘General’ is closed to new topics and replies.