
Process stability


  • #31334

    Summerfield
    Participant

    Our process is not stable initially at the start of a run. If data is taken during these initial stages, the process is not capable, as one would expect.
    How should the parts produced during this unstable period be treated? Should they be inspected 100%, or is there a way to cut down on inspection by sampling? Any ideas?
    Since the part produced differs from run to run, there is no definite time period after which the process stabilises. Launching a project to fix the problem seems a very expensive option.

    #82476

    Robert Butler
    Participant

    The description of your process gives the impression that you never produce the same part twice (“Since at every run the part produced is different”). However, whether you produce the same part intermittently or truly never produce the same part twice, you do have prior measurements of the time it takes a production run to stabilize. With these times you can determine the distribution of time to stability and thus identify the time by which X% of your runs will have stabilized. You could use this number as a guide when deciding on changes to the sampling procedure.
    100% inspection during startup is certainly the safest and most conservative approach. Other sampling methods are possible if you have some sense of process behavior during startup. However, if you really don’t make the same part twice, if there is no obvious relation between the observed extremes in product properties from one run to the next, or if startup extremes can be expected to exceed the required spec limits, then the only way to avoid shipping bad product is 100% inspection.
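    As a minimal sketch of that idea in Python (the times below are invented placeholders; substitute your own records of time to stability):

        import numpy as np

        # Hypothetical stabilization times in minutes from past runs;
        # replace with your own measurements.
        times = np.array([12.5, 8.3, 15.1, 9.7, 22.4, 11.0, 13.8,
                          10.2, 18.6, 9.1, 14.4, 12.0, 16.9, 10.8])

        # Time by which 95% of past runs had stabilized (empirical quantile).
        cutoff = np.quantile(times, 0.95)
        print(f"95% of runs stabilized within {cutoff:.1f} minutes")

    Parts made before the cutoff would stay on 100% inspection; parts made after it could move to a reduced sampling plan.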

    #82477

    Chip Hewette
    Participant

    An unstable process can still operate within customer specifications.  Does yours?  It sounds like you are worried that some parts will not be in specification.
    100% inspection is not always effective, and sampling is not appropriate for finding random errors. Think of the statistics: if we measure every tenth part, we have only a 1-in-10 chance of catching a random event. We could close our eyes and say “everything in between these two parts must be OK,” but I sure wouldn’t want to be the customer!
    I would suggest the following:
    a.  Measure the time to stability for the next thirty runs. Calculate the mean and standard deviation of the log-transformed times. Evaluate: is the set of numbers lognormally distributed? If so, what is the maximum? Can you infer that parts produced after this maximum time come from a stable process? (A worked sketch follows this list.)
    b.  Many fundamental laws of the universe may be forcing the initial instability, but consider a DMAIC project on just this facet of production.  What are the likely causes for instability?  Can you control these at all?  Can you reduce the time of instability?
    c.  Inspect the parts until stability commences!
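    A sketch of step (a), assuming thirty measured times to stability. The data here are fabricated placeholders drawn from a lognormal, just to make the example runnable:

        import numpy as np
        from scipy import stats

        # Thirty hypothetical times to stability (minutes); replace with
        # measurements from your next thirty runs.
        rng = np.random.default_rng(1)
        times = rng.lognormal(mean=2.5, sigma=0.4, size=30)

        logs = np.log(times)
        print(f"mean of log-times:    {logs.mean():.3f}")
        print(f"std dev of log-times: {logs.std(ddof=1):.3f}")

        # Fit a lognormal with location fixed at zero (time cannot be
        # negative) and take its 99th percentile as a conservative
        # "stable after" bound.
        shape, loc, scale = stats.lognorm.fit(times, floc=0)
        bound = stats.lognorm.ppf(0.99, shape, loc=loc, scale=scale)
        print(f"observed max: {times.max():.1f} min; "
              f"fitted 99th percentile: {bound:.1f} min")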

    #82503

    Summerfield
    Participant

    Thanks for the suggestions. I will try these and see what I get.

    #82505

    PA
    Participant

    Chip,
    Can you explain further? Also, why lognormal?

    #82509

    Chip Hewette
    Participant

    First, I hoped that studying the time until stability occurs would allow the process owner to set a simple, factory-proof work instruction: parts produced before this time are to be segregated, inspected, re-inspected, scrapped, or whatever. Factories are not the best place to be wishy-washy or unclear with work instructions.
    Second, with regard to time to stability, it is often best to transform the data with a logarithm. Time is of course bounded below by zero (unless you watch too much Star Trek), so a set of time data is naturally skewed. If one calculates a standard deviation from skewed data, interpretive errors can be made. The log transform allows the data to be evaluated for normality. If the data are lognormally distributed, one can infer that, within the realm of sampling, the instability comes from many small random events. If, however, the logged data show non-normality, the process owner can seek improvements by eliminating special causes.
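    A sketch of that check, again with fabricated data: test the log-transformed times for normality. Shapiro-Wilk is one common choice.

        import numpy as np
        from scipy import stats

        # Hypothetical times to stability (minutes); use your own data.
        times = np.array([10.4, 13.9, 8.7, 21.5, 12.2, 9.8, 16.3, 11.5,
                          14.8, 10.9, 19.7, 12.6, 9.2, 15.4, 11.1, 13.2])

        stat, p = stats.shapiro(np.log(times))
        if p > 0.05:
            # Logged times look normal: the instability is consistent
            # with many small random (common) causes.
            print(f"lognormal is plausible (p = {p:.2f})")
        else:
            # Logged times are non-normal: hunt for special causes.
            print(f"not lognormal (p = {p:.2f}); look for special causes")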
    Third, I hoped that DMAIC could provide an answer as to the source of the instability. If the process is unstable at the beginning, why would we assume it is always stable later? If we don’t know what makes it unstable, how can anyone say, “Oh, we just know it is stable now”?
    My recommendation in these situations is to stand in front of a mirror and pretend you are talking to the end customer.  If you can’t explain it to that person, seek to make the process understood.

    #82522

    PA
    Participant

    Chip, thanks for your clarification.

