
Justification for Sampling Frequency


  • #31114

    weisusu
    Member

    I have recently implemented a new process that generates a critical characteristic with a Cpk of less than 1.33. The sampling frequency is currently 3 parts/hr (chosen more or less arbitrarily, with no good justification). My customer is pushing me to increase the sampling frequency, either by taking a larger sample per hour or by sampling more often. I have accumulated some historical data from the testing by now. My question is: is there any statistical analysis I can perform to persuade my customer that the current sampling frequency is adequate? I know of a tool in Statistica that I could use, but I am not sure how to use it. Thanks.

    #81785

    Michael Schlueter
    Participant

    Dear Weisusu,
    How many parts do you produce per hour? 300? What is your rationale for avoiding a higher sampling rate?
    Let's focus on another important aspect first. I suppose you want to deliver good quality, don't you? There is a strong link between sampling frequency and delivered quality level.
    First: how do you obtain the desired quality level on-line?
    Can you adjust your process when samples drift away from their target value? Then you can monitor the variation in your process and readjust it back on target once samples exceed certain control limits. Usually control limits are within spec limits, which are within process limits. I suggest identifying a strong adjustment factor in your process. Then you can actively influence your process rather than watching it do bad things (and discussing it endlessly).
    There are 2 ways I know to establish control limits:

    Bhote's Pre-Control approach
    Taguchi's Online Quality Engineering approach
    Bhote's approach seems to be independent of sampling frequency, apart perhaps from the rules you should follow to obtain good SPC data (statistical arguments). Taguchi's approach balances (economic arguments):

    sample frequency
    measurement cost
    adjustment cost
    desired quality level (allowed tolerance)
    quality loss (the extra cost bad products incur at your customer)
    measurement accuracy
    lead effects
    Bhote pragmatically takes the whole spec range (for two-sided tolerances) and places the control limits at the quarter points of the spec range; i.e. the middle half of the spec range forms the green "good" zone, and the two outer quarters form the yellow "caution" zones (see the sketch after the two risks below). This way he compromises between two risks:

    either setting the control limits too narrow while the process is good,
    or setting them too far apart while the process is bad.
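    A minimal sketch of how such pre-control lines could be computed and used for a two-sided specification (the function and variable names are only illustrative, not taken from Bhote's book):

        def precontrol_lines(lsl, usl):
            """Place pre-control lines at the quarter points of a two-sided spec range."""
            width = usl - lsl
            return lsl + width / 4.0, usl - width / 4.0

        def zone(x, lsl, usl):
            """Classify a measurement into green/yellow/red pre-control zones."""
            lower_pc, upper_pc = precontrol_lines(lsl, usl)
            if lower_pc <= x <= upper_pc:
                return "green"       # middle half of the spec range: leave the process alone
            elif lsl <= x <= usl:
                return "yellow"      # outer quarters: caution, consider readjusting
            else:
                return "red"         # out of spec: stop and act

        # example: a spec of 3.5 +/- 2 gives pre-control lines at 2.5 and 4.5
        print(zone(4.7, 1.5, 5.5))   # -> "yellow"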
    Taguchi's rationale is: it all depends on the monetary loss your customer (or society) incurs. Ideally this loss would be close to $0.
    E.g. an off-target product can incur a high monetary loss when you produce medical equipment which monitors a patient's life. A malfunction which is still in spec might kill the patient in the end: the loss is very high. So this manufacturer can economically justify monitoring his process frequently, adjusting it frequently and using tight control limits for this purpose. The goal is to find a balance which minimizes the total monetary quality loss, satisfying all parties involved.
    E.g. out-of-spec conditions for the length of matches from a match maker may be no big deal at all (nobody gets hurt, the quality loss is very low). In this case it is economically justified to sample rarely and to readjust rarely, i.e. to use wide control limits. Again, the goal is to find the balance which minimizes the total monetary quality loss.
    In other words, Taguchi's answer to what is "good" and what is "bad" is: it depends, but you always want to minimize loss. So your specific situation determines your specific sampling frequency. Taguchi's formulas for calculating all the relevant quantities are a bit more involved than Bhote's pragmatic rule.
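    For reference, the quadratic quality loss function behind this economic argument is L(y) = k (y - m)^2, where m is the target and k is usually estimated as the loss at the spec limit divided by the square of the tolerance. A minimal sketch with made-up numbers:

        def quality_loss(y, target, tolerance, loss_at_spec_limit):
            """Taguchi's quadratic loss: L(y) = k * (y - target)^2, with k = A0 / Delta0^2."""
            k = loss_at_spec_limit / tolerance ** 2
            return k * (y - target) ** 2

        # illustrative numbers: target 3.5, tolerance +/- 2, $10 loss for a part right at the spec limit
        print(quality_loss(3.5, 3.5, 2.0, 10.0))   # 0.0  -> an on-target part causes no loss
        print(quality_loss(4.5, 3.5, 2.0, 10.0))   # 2.5  -> in spec but off target still carries loss
        print(quality_loss(5.5, 3.5, 2.0, 10.0))   # 10.0 -> a part at the spec limit carries the full loss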
    Bhote presented examples where the Cpk improved dramatically after his control limit settings were introduced. I have seen several industrial case studies from Japan where Taguchi's online QE approach led to 2 or 3 subsequent improvement cycles. This is quite normal:

    the process variation becomes much better
    new factors limit capability now, like measurement accuracy
    quality level improves, monetary quality loss decreases
    Hope this gives you another route towards an answer to your question.
     
    Michael Schlueter
     

    #81791

    John J. Flaig
    Participant

    Michael,
    Can you explain the statement you made in your response, "Usually control limits are within spec.-limits, which are within process-limits"?
    It seems to me that control limits are process limits. Dr. Wheeler even calls control limits natural process limits.
    Regards,
    John
    John J. Flaig, Ph.D.
    Managing Director
    Applied Technology (http://www.e-AT-USA.com)

    #81797

    Hemanth
    Participant

    Is your customer receiving rejected parts? If yes, then you need to look at the sampling frequency. Secondly, is your process in control? Your Cpk also worries me; have a look at it. If it is not a problem to measure, increase your frequency. Unfortunately, I haven't come across any formula for determining the sampling frequency for control charts.

    #81801

    Jackey
    Participant

    I am very interested in the two approaches you introduced. Could you please tell me where I can get more details about them?

    #81809

    Michael Schlueter
    Participant

    John,
    Thank you very much for your interesting question. Maybe the same word is being used to express different things. I think rephrasing "control limit" as "readjustment level" makes it more evident:

    we introduce process feedback.
    In electronics, feedback is widely used to make circuits stable and insensitive to external variations, as imposed by wafer fabs, thermal drift, ageing or applications, or even worse: by users.
    Written in pseudocode, the feedback process is (/* comments */):

    forever {                                         /* each cycle incurs a monetary quality loss to society */
        sample = take_sample()                        /* how frequently? Weisusu's question; incurs measurement cost */
        if (sample is within readjustment_range) {
            /* do nothing: process is ok */
        } else {
            while (sample is outside readjustment_range) {
                readjust_process_on_target()          /* incurs readjustment cost */
                sample = take_sample()
            }
        }
    }
    It makes little sense to widen the adjustment range to the natural process limits, when our objective is to deliver on-target quality. (It may make sense for other purposes, though.)
    Dr. Taguchi did not feel comfortable with merely delivering within-spec quality, either: products close to the spec limit (ok) do not differ much from products slightly outside the spec limit (not ok), he argued, while there is a significant difference between an on-target delivery and a close-to-spec-limit delivery. Think of an on-target hot pizza delivery in contrast to a still-warm-enough pizza delivery. Certain readjustment levels can make sense to guarantee close-to-hot delivery, unless there are other alternatives (i.e. better system/process designs).
    It is important to identify one (and only one) strong factor which can be used for readjustments over the whole operating range. It is convenient and it reduces risk. For example, it is convenient to steer a car with just one steering wheel. Having to adjust the wheel, the left and the right engine and the brakes all at the same time (as in some airplanes or tanks) may even lead to fatal results, just to make a turn. Axiomatic Design people will give you even worse examples of real-life products which actually do behave this riskily.
    Accurate results can be obtained with inaccurate processes by introducing this sort of continuous process feedback. For example, when we readjust a mechanical watch at appropriate intervals it may become almost as precise as a digital one, at the expense of the additional process feedback (nowadays digital watches are cheap enough anyway). It is a matter of someone's available resources which is more economical:

    staying with a bad process but compensating with process feedback,

    or redesigning the process for lower variation.
    Driving a car in a lane is an illustration of this kind of process feedback. The lane's borders represent the USL and LSL, respectively. We leave the steering wheel as it is when we are far away from the LSL and USL; i.e. we drive in the middle of the lane: on target.
    It would be a questionable idea to wait to adjust the steering wheel until we exceed the lane's borders (USL or LSL). Instead we do it a little earlier, e.g. while we are still 1 or 2 feet within the limits. So we act as if following invisible readjustment levels which narrow the lane by 2 or 4 feet. (Again, it can make sense to utilize the maximum available natural range, e.g. when we need to change lanes, i.e. when we want to conquer a different market segment with our process.)
    It does not matter if we move the steering wheel while we are within the safe readjustment range, i.e. close to the middle of the lane. However, we have learned not to drive in zigzags: the police may consider us drunk. In a manufacturing process it is likewise not very rational to change the adjustment when the process is within the readjustment range. Unnecessary adjustment would at least increase cost. Why do it? We will readjust when it is time to do so.
    Regards, Michael

    #81810

    Michael Schlueter
    Participant

    Jackey,
    Thank you for your interest.
    For Bhote's approach please refer to:

    World Class Quality: Using Design of Experiments to Make It Happen, 2nd Edition
    Keki R. Bhote, Adi K. Bhote
    AMACOM, ISBN 0-8144-0427-8
     
    For Taguchi’s approach please have a look at:

    Taguchi on Robust Technology Development;
    Bringing Quality Engineering Upstream
    Genichi Taguchi, Shih-Chung Tsai (Translator)
    ASME Press Series on International Advances in Design Productivity
    http://www.amazon.com/exec/obidos/tg/detail/-/0791800288/qid=1041513907/sr=1-5/ref=sr_1_5/002-8908735-8881620?v=glance&s=books
    Dr. Taguchi is a bit short in this book on the mathematics behind his ideas. However, you can get his basic message.
    Please share your experience with me, later.
    Regards, Michael

    #81816

    Chip Hewette
    Participant

    First, congratulations on implementing control chart techniques.  These are valuable tools and you are on the right track.
    Second, much theoretical discussion aside, if the process Cpk >1.0 and Cpk < 1.33, the process is on the ragged edge.
    Third, one should create reaction plans for ensuring product quality based on the sample data.  What would you do if eight sample averages in consecutive order were above the calculated long-run process average? (A simple check for this run rule is sketched at the end of this post.) There are many other conditions that should raise a red flag and require a studied response.
    Fourth, the sample plan should not be based on the Cpk.  It should be based on an earnest desire to capture variation.  Sample subgroups should be of nearly similar production subgroups, and the interval between samples should be long enough to showcase other sources of process variation.  Increasing the number of samples would be very helpful in measuring the ‘within group’ standard deviation.  Decreasing the interval between samples might be a waste, and camouflage the sources of real variation.
    Fifth, if this is a new process, one should have at least 30 subgroups in hand before calculating process limits, and at least 300 initial samples measured for this critical characteristic.
    Sixth, since you state it is a new process, please question if the historical data has enough variation ‘baked in.’  Are all shifts represented?  All operators?  All process inputs?  Chemicals?  Make sure you don’t use data from a limited viewpoint to make important decisions.
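    Regarding the third point above, here is a minimal sketch of checking the "eight consecutive subgroup averages above the long-run average" run rule (the names and data are only illustrative):

        def eight_in_a_row_above(subgroup_means, long_run_average, run_length=8):
            """Flag run_length consecutive subgroup averages above the long-run average."""
            run = 0
            for i, m in enumerate(subgroup_means):
                run = run + 1 if m > long_run_average else 0
                if run >= run_length:
                    return True, i    # rule violated at subgroup index i
            return False, None

        # made-up data with a small upward shift after the fifth subgroup
        means = [10.1, 9.9, 10.0, 9.8, 10.2, 10.4, 10.3, 10.5, 10.4, 10.6, 10.3, 10.5, 10.4]
        print(eight_in_a_row_above(means, 10.0))   # -> (True, 11)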

    #81820

    John J. Flaig
    Participant

    Michael,
    Thank you for a very clear, interesting, and humorous explanation. Here is my impression of what is being recommended by Dr. Taguchi: the use of "action" limits inside the 3-sigma control limits that justify adjusting the process. A similar approach is used in some SPC texts, where these are called warning limits and are frequently set at 2 sigma. When the process indicates instability based on the warning limits, additional sampling is recommended, and action is taken if the instability is confirmed. This methodology is the foundation of adaptive process control.
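    A minimal sketch of that warning-limit idea, computing 2-sigma warning limits alongside the usual 3-sigma control limits for an Xbar chart; it assumes the within-subgroup sigma has already been estimated, and the names and numbers are only illustrative:

        import math

        def xbar_limits(grand_mean, sigma_within, subgroup_size):
            """2-sigma warning limits and 3-sigma control limits for an Xbar chart."""
            sigma_xbar = sigma_within / math.sqrt(subgroup_size)
            warning = (grand_mean - 2 * sigma_xbar, grand_mean + 2 * sigma_xbar)
            control = (grand_mean - 3 * sigma_xbar, grand_mean + 3 * sigma_xbar)
            return warning, control

        # illustrative numbers: grand mean 3.5, within-subgroup sigma 1.71, subgroups of 5
        warning, control = xbar_limits(3.5, 1.71, 5)
        print(warning)   # roughly (1.97, 5.03)
        print(control)   # roughly (1.21, 5.79)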
    Dr. Taguchi's approach sounds like it incorporates elements of statistics and economics to arrive at the "appropriate" action limits. This philosophy seems to agree with that of Dr. Shewhart and Dr. Tukey, both of whom felt that control limits should be based on economic impact as well as statistical considerations.
    The combination of SPC and EPC (engineering process control) provides a very powerful tool for process management. However, the EPC models that I have seen do not incorporate economic impact, and I feel, as Dr. Taguchi does, that they should. My only criticism of the Taguchi approach is that it is binary (i.e., we adjust only when we exceed the "action" limit). I believe the appropriate econo-stat EPC model would provide the "optimal" process adjustment on a continuous basis. I'd be interested in your thoughts on this.
    Regards,
    John

    #81823

    Chris Seider
    Participant

    Chip,
    Your advice on this problem is pragmatic and useful in its approach.  I agree with all of your statements, especially the last paragraph. 
    I would add a few thoughts.  First, I’ve seen no one confirm the measurement system’s ability to be precise.  How much of the < 1.33 Cpk is due to the measurement system?  If it's poor, then fix it (if possible), even with averaging techniques, and the Cpk should improve.
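    On the measurement-system point, here is a back-of-the-envelope way to see how much of the observed Cpk could be eaten up by the gage: the observed variance is roughly the true process variance plus the measurement variance, so removing (or averaging away) the gage variation improves the capability. A minimal sketch under that assumption, with made-up numbers:

        import math

        def cpk_without_gage(observed_cpk, gage_share_of_variance):
            """Rough Cpk if the measurement variation were removed.

            Assumes sigma_observed^2 = sigma_process^2 + sigma_gage^2 and that the
            process stays centered, so Cpk scales with 1 / sigma.
            """
            process_share = 1.0 - gage_share_of_variance
            return observed_cpk / math.sqrt(process_share)

        # made-up example: observed Cpk of 1.20 with a gage consuming 30% of the observed variance
        print(round(cpk_without_gage(1.20, 0.30), 2))   # -> 1.43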
    Also, I might ask the original poster to compare data on the short term (days, weeks, hours, minutes, depending on the industry) versus the long term.  If there is little difference (or less than the 1.5 sigma shift), then this could be an argument NOT to have to sample any more.  If no study has been done on the truly short term, then once it is completed you might find sources of variation.  Just be sure to use a multi-vari approach and to collect process data along with the results.

    #81844

    John J. Flaig
    Participant

    Chip,
    Very good advice on sampling. I only have one observation on your item #5. It is not necessary to wait for 30 subgroups to calculate control limits. Control limits can be calculated after the second subgroup using Dr. Quesenberry's Q-charts. These limits become asymptotic to the standard control limits as the number of subgroups goes to infinity (actually they are very close even when n = 30). I point this out because some processes produce very few units per week. So if you produce one unit per week, you would have to wait 30 weeks before calculating trial control limits (a lot can happen in 30 weeks). If you want to see an example of a Q-chart, check out my website home page.
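    For anyone curious, a sketch of Quesenberry's Q statistic for individual observations with both mean and variance unknown, as I understand it from his papers (please check the originals before relying on this): each new observation is standardized against the running mean and standard deviation of the earlier ones, pushed through the Student t distribution and mapped back to a standard normal score, which is then plotted against fixed limits at plus/minus 3.

        import math
        from scipy import stats

        def q_statistics(x):
            """Q statistics for individual observations, mean and sigma unknown (my reading of Quesenberry)."""
            q = []
            for r in range(3, len(x) + 1):        # at least two earlier points are needed
                prev = x[:r - 1]
                n = len(prev)
                mean = sum(prev) / n
                sd = math.sqrt(sum((v - mean) ** 2 for v in prev) / (n - 1))
                t = math.sqrt((r - 1) / r) * (x[r - 1] - mean) / sd
                q.append(stats.norm.ppf(stats.t.cdf(t, df=r - 2)))   # plot against limits at +/- 3
            return q

        # made-up data; the first two observations only seed the running estimates
        print([round(v, 2) for v in q_statistics([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3])])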
    Regards,
    John
    John J. Flaig, Ph.D.
    Applied Technology
    http://www.e-AT-USA.com

    #81848

    Chip Hewette
    Participant

    Thanks for the additional info on Q-charts.  I guess the 30 subgroup rule comes from that asymptotic approach to limits calculated at an infinite number of subgroups.

    #81929

    Michael Schlueter
    Participant

    John,
    Sorry for my delayed answer. Thanks for your appreciation of my explanation.
    Yes, you are right. Dr. Taguchi based his argument entirely on an appropriate quality loss function. Statistics is visible only in one term, as a certain estimate of a constant factor. His purpose was to motivate continuous quality improvement, because many people stop too soon at good results rather than pushing on to excellent results. His QLF is an estimate of the 'real' one. The sharper the process becomes, the lower the QLF will be and the smaller the deviations from the 'real' losses will be.
    I tried to find something about the EPC you mentioned. Can you please point me to more details?
    It will probably be economically justified to use EPC in some cases, while it may be expensive in others. I like the idea of continuous adaptation. However, there may be problems with it.
    Noise can be a problem in electronic feedback systems. Recall Nyquist as one way to estimate the conditions for stability (open/closed loop gain, frequency responses etc.). Limited bandwidth often accompanies feedback systems, or it is intrinsically built in anyway. E.g. loops that are too fast can quickly drive a system into instability (think of the heating/cooling system at home or in your office: it probably uses PID regulators, i.e. time derivatives and time integration; it depends on the system which will be more stable).
    Noise in electronics corresponds to variability in processes. A bandwidth limitation is in fact imposed by Dr. Taguchi's online QE system: the sampling frequency differs from the adjustment frequency. Which means, depending on the specific case, you adjust at a slower rate than you monitor bad changes (or vice versa).
    We can also question feedback systems, or adaptive systems, for process control in general. Why? Let's focus on the ideal process (just some TRIZ thinking here):
    The current process:

    provides good: it creates products within a specified range
    provides bad: it creates products off-target
    it creates scrap and complaints, hence cost
    Feedback or adjustment systems add extra cost. The extra cost may be overcompensated by the gain in quality level. But even when the quality level has become truly excellent one day, we will still be carrying the extra cost of this extra system.
    So the ideal process:

    creates only on-target products
    costs nothing, creates no further harm.
    The ideal process may never become reality, but at least we can approach it step by step. To come up with a solution we can formulate a direction in TRIZ-speak (let's tear down all our psychological inertia):

    we found the X-resource  (X: yet unknown, but available)
    which has the property
    to create only on-target products
    while eliminating (all) cost
    Well, TRIZniks would now re-examine all available resources and check whether, or how well, each resource fits this direction. In other words: whether or not the X-resource is already at hand.
    This process is much like looking for a specific book in a library. Go to a library and look for a book without defining its desired properties. You probably won't find anything for your purpose. But state its desired properties (e.g. it must be an in-depth text, less than 100 pages, small-sized ...) and you will probably come up with half a dozen valuable books. Creativity works exactly this way: state what you are looking for, go out, and find it.
    Now, there is one available resource which in fact fits into the desired properties of the yet unknown X-resource: the process ITSELF, nothing else:

    the process ITSELF (nothing else)
    has (to have) the property
    to create only on-target products
    while eliminating (all) cost
    Read it as advocacy for process improvement, process redesign, process robustness, process innovation, 12-sigma ... In the extreme we would have just the on-target products WITHOUT any process at all (no cost, no harm, no complaints, no scrap, no transport ...), just the benefits.
    Impossible? Maybe. But we can approach this vision step by step. Every Six Sigma success story should prove this process of iterative approximation towards the ideal process. Which in turn means you may be able to short-cut (or increase the effectiveness of the solution of) some Six Sigma projects.
    Michael Schlueter

    #82662

    Bruce Floyd
    Participant

    Interesting discussion.  I am writing an article on this subject for a food magazine.  Would any of you care to be interviewed?  Please reply to my email address with a telephone number and a time you are available. (bfloyd7192 @ aol.com)
    No one mentioned the spread of the individual sample groups.  There was a time when, if the range of the subgroups was outside the norm, this would spark additional sampling.  It would indicate that the environment had changed.  The average does not tell the whole story.

    #82664

    Chip Hewette
    Participant

    Amen on looking for out-of-control points on the R chart.  No question that the average doesn’t tell it all!
    Didn’t see your e-mail address, Mr. Floyd…

    #82687

    Gabriel
    Participant

    Boys (and girls),
    I think you are not putting yourselves in the customer's shoes.
    If I am the customer, I want my supplier to deliver only good parts with minimal control/inspection/scrap/rework/recheck costs. Any supplier is, indirectly, charging all its customers for all those costs. Where else does the supplier get the money to pay for them?
    Reaching this point requires a process with good capability. But in this case the supplier's process does not have good capability yet. In this case, and given that it is a critical characteristic, the priority goes to "only good parts", leaving the "minimal cost" to be developed in the future (especially if it is a new process). If the characteristic were not very important, one could agree on a temporary deviation with the supplier to keep "minimal cost" as the priority and develop good capability in the future.
    I agree with checking the measurement system, analyzing the process to see which control parameters can be adjusted to keep the process on target and minimize variation, etc.
    I am sorry, but in the meantime, given that it is a critical characteristic, ONLY A 100% CHECK IS ACCEPTABLE.
    One posted above that the sampling frequency is not a function of the Cpk, but it is about catching special causes of variation. This is absolutely true.
    But one very important thing has to be realized, and I think it should have been posted before by many of you:
    A PERFECTLY STABLE PROCESS THAT IS PERFECTLY ON-TARGET STILL DELIVERS BAD PARTS IF IT HAS POOR CAPABILITY (what counts as poor and what counts as good is to be agreed between the customer and the supplier, but the threshold seems to be 1.33 in this case).
    If the process remains in control (and that is what you check with the samples and the control chart) that means that the process delivers its usual distribution, WHICH INCLUDES TOO MANY BAD PARTS IF THE CAPABILITY IS POOR.
    Even if you use the samples to check not only stability but also conformity, YOU CAN EASILY HAVE BAD PARTS BETWEEN TWO GOOD SAMPLES IN A STABLE PROCESS WITH POOR CAPABILITY.
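    To put numbers on that: for a centered, perfectly stable process whose output is normally distributed, the expected fraction outside a two-sided spec follows directly from the Cpk. A quick sketch (the normality assumption is mine):

        from scipy.stats import norm

        def ppm_out_of_spec(cpk):
            """Expected PPM outside a two-sided spec for a centered, stable, normal process."""
            # Cpk = (USL - mean) / (3 * sigma), so each spec limit sits 3 * Cpk sigmas from the mean
            tail = norm.sf(3 * cpk)        # one-sided tail probability
            return 2 * tail * 1e6

        for cpk in (1.00, 1.33, 1.67):
            print(cpk, ppm_out_of_spec(cpk))
        # roughly 2700 PPM at Cpk 1.00, about 66 PPM at 1.33, well under 1 PPM at 1.67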
    Imagine the following simple process: you roll a die. That gives you an output from 1 to 6 with an average of 3.5 and a standard deviation of 1.7 (it's a fair die). Now you establish Xbar/R control charts with subgroups of 5. The standard deviation of Xbar will be 0.76, so the limits for Xbar (at ±3 sigma of Xbar) will be 1.21 and 5.79. As long as the process remains in control, you will very seldom find an Xbar beyond the control limits or another OOC signal (such as seven points above the average) just by chance. And the same for R. If you introduce a special cause (the die is swapped for one numbered from 3 to 8, or the "appraiser" suddenly starts adding 3 units to the value of the die, or the "instrument" starts adding some random error between -3 and +3, etc.), you will eventually find the corresponding OOC signal. If none of this happens, you will have a stable process.
    But let's say that the specification is 3.5±2. Now tell me: which sampling frequency will ensure that you are not delivering bad parts? None!
    You get the following samples, let's say every 15 minutes: (2,3,5)(3,3,5)(2,2,4)(1,4,5). All the samples delivered in-control points. The last sample, however in control, contained 1 nonconforming part. What do you do? Many bad parts could have been produced anywhere between good samples. Remember that the nonconformity was not due to a special cause; it is a normal output of this process, so you cannot take the last "good" sample as your "problem started" point. So your reaction plan should be "reinspect the whole production". If you decide to sample more often, for example every 5 minutes, you only change how often you find a bad part in a sample, and you increase the sampling/control cost. Your decision will still be "we are finding bad parts in a stable process, so we have to recheck the whole production". Why not save time and establish a 100% check from the beginning? At least you won't need to recheck the samples already taken for the SPC.
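    A quick sketch that reproduces the control-limit arithmetic of the die example and shows that this perfectly stable process keeps producing out-of-spec values no matter how often you sample:

        import math
        import random

        faces = [1, 2, 3, 4, 5, 6]
        mean = sum(faces) / 6                                          # 3.5
        sigma = math.sqrt(sum((f - mean) ** 2 for f in faces) / 6)     # about 1.71 for a fair die

        n = 5                                                          # subgroup size used for the Xbar limits
        sigma_xbar = sigma / math.sqrt(n)                              # about 0.76
        print(mean - 3 * sigma_xbar, mean + 3 * sigma_xbar)            # about 1.21 and 5.79

        # spec 3.5 +/- 2: faces 1 and 6 are nonconforming even though the process is perfectly stable
        lsl, usl = 1.5, 5.5
        rolls = [random.choice(faces) for _ in range(1000)]
        bad = sum(1 for r in rolls if r < lsl or r > usl)
        print(bad / len(rolls))                                        # about 1/3 of the parts, with no special cause in sight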
    You said that you want to use your data to convince your customer that a higher frequency is not needed. The only way you would convince me of that (speaking about a characteristic that is critical to me) is to prove that your percentage of bad parts is less than some small value we agree on, which is the same as saying that the Cpk is greater than some value we agree on. If you can't prove the Cpk, you can't prove the percentage of bad parts with the same data. If you can prove a low rate of bad parts, you can prove the Cpk with the same data (and we would not be having this discussion).
    Don't say that you checked 500 parts and found not a single one out of tolerance. That doesn't give good confidence that the rate of bad parts is less than 1 out of 500. For example, if you wanted to prove with 95% confidence that you have no more than 500 PPM (1 out of 2000), you would need to find 0 defectives among 6000 parts, or at most 1 defective among 9500.
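    Those sample sizes follow from the binomial distribution: to claim no more than 500 PPM with 95% confidence, the probability of seeing as few defectives as you actually saw, if the true rate really were 500 PPM, must be at most 5%. A minimal sketch of that check:

        def demonstrates_ppm(n, defectives_found, ppm_claim, confidence=0.95):
            """True if finding this few defectives in n parts supports 'no more than ppm_claim' at the given confidence."""
            p = ppm_claim / 1e6
            prob = 0.0                     # exact binomial P(X <= defectives_found) under the claimed rate
            coeff = 1.0
            for k in range(defectives_found + 1):
                if k > 0:
                    coeff = coeff * (n - k + 1) / k
                prob += coeff * p ** k * (1 - p) ** (n - k)
            return prob <= 1 - confidence

        print(demonstrates_ppm(500, 0, 500))    # False: 0 bad parts in 500 proves very little
        print(demonstrates_ppm(6000, 0, 500))   # True: matches the 0-in-6000 figure above
        print(demonstrates_ppm(9500, 1, 500))   # True: matches the 1-in-9500 figure above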
    In conclusion: do whatever you can to improve the process. In the meantime, check 100%.

