iSixSigma

SPC Dilemma


Viewing 40 posts - 1 through 40 (of 40 total)
  • #41025

    Seaotter
    Member

    Hi,
    I’ll try to summarise this as best I can…
    I have 5 machines running the same (complex) processes.
    5 variables (say a, b, c, d & e) are monitored on finished goods from each machine and are manually control charted.
    Each machine has the same types of raw material inputs. These inputs are individually variable and come from different suppliers who do not necessarily run SPC or Six Sigma. So, changing any of the raw inputs affects the process.
    I’ve done some investigating of the raw data with Minitab and shown that, regardless of the above, the overall process is capable with regard to the various specifications. In some cases I have Ppk values over 3.5.
    In view of VOC, I am fairly confident that we can reduce our current SPC sampling rate and not impact customer perceived quality.
    What I don’t know is….
    How do I prove it?
    Any thoughts very gratefully received. This is keeping me up nights!
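    [Editor’s note: a minimal sketch (Python, entirely made-up numbers) of the overall-capability arithmetic behind a Ppk figure like the ones quoted above. The spec limits and readings below are hypothetical placeholders, not the actual process data.]

```python
# Sketch only: overall (long-term) capability from raw individual readings,
# the way an "overall" Ppk is usually computed. All values are hypothetical.
import statistics

lsl, usl = 1.0, 100.0                      # hypothetical spec limits
data = [49.8, 50.3, 50.1, 49.6, 50.4,      # hypothetical individual readings
        50.0, 49.9, 50.2, 49.7, 50.5]

mean = statistics.mean(data)
s = statistics.stdev(data)                 # overall standard deviation

ppu = (usl - mean) / (3 * s)               # room above the mean, in 3-sigma units
ppl = (mean - lsl) / (3 * s)               # room below the mean
ppk = min(ppu, ppl)
print(f"mean={mean:.2f}  s={s:.3f}  Ppk={ppk:.2f}")
```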
     

    0
    #128217

    Mr IAM
    Participant

    Seaotter,
    Nice problem to have!  I have a few thoughts…
    I think the Ppk > 3.0 proves it.  What does your management need for proof?  What are the return rates for the characteristics you’re charting?  They should be zero based on the data, right?
    But if it is cheap to continue charting, and the process of manual charting keeps the machine operators / technicians engaged in the process, it could be worth it to keep going, even if that is all it is accomplishing.

    0
    #128219

    SixPence of Sigma
    Member

    Some preliminary questions: Are you using primary or secondary data to make your determinations regarding process performance?  How many data points are you looking at? Are the processes stable and in control?

    0
    #128222

    Seaotter
    Member

    SixPence,
    I’m using raw data. Individuals for each sample for each different variable, so I’m confident in that.
    As for data points (samples), I have thousands for each machine. With regard to Minitab’s calculation of control limits the processes would be OOC, but I’m interested in consumer (I may have said customer at first, sorry) perceived quality. What I don’t want to do, due to the massive number of input variables, is start to tighten existing controls and give the engineers an SPC headache when the consumers are not seeing any issues.
    Seaotter.

    0
    #128223

    SixPence of Sigma
    Member

    Based upon your reply, I would not have a lot of faith in the process capability numbers.  The process is neither stable nor in control.  I would be interested in finding the special causes and removing them, then getting a true look at the process capability.

    0
    #128224

    Seaotter
    Member

    I think you’ve hit the nail on the head, in a way. Unfortunately, not all of the processes are as capable as the others and the tests themselves are not cheap when looked at by volume over, say, a year.
    I agree that keeping the operators testing and engaged is a ‘good thing’ and I’m not looking to remove it. What I’m trying to do is rationalise.
    I think what I really need is a way to say “If we reduce our testing of ‘a’ from x to y, we’ll save z and the customer will see no change.”
    Seaotter

    0
    #128225

    FTSBB
    Participant

    Without diving into your capability analysis (I’m assuming you and others are satisfied with these numbers…), it looks like you’re looking for a sample size reduction.  Suggest looking at statistically-based sample size calculations – with all the data you have, you probably have a good understanding of your process spread and customer (aka consumer) requirements, which will lend itself to easy computation of sample sizes.  Here’s a place to start:
    http://www.itl.nist.gov/div898/handbook/ppc/section3/ppc333.htm
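    [Editor’s note: a minimal sketch of the kind of statistically based sample-size calculation that handbook section covers – estimating how many samples are needed to pin down the process mean to a chosen margin of error. The sigma, margin and confidence figures are hypothetical placeholders.]

```python
# Sketch only: sample size needed to estimate the process mean to within a
# margin E at a given confidence, assuming a known sigma. Values hypothetical.
from statistics import NormalDist

sigma = 2.0          # estimated process standard deviation (hypothetical)
margin = 0.5         # acceptable error on the mean, same units (hypothetical)
confidence = 0.95

z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # two-sided critical value
n = (z * sigma / margin) ** 2
print(f"z = {z:.2f}, required n ≈ {n:.1f} (round up)")
```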

    0
    #128229

    Mr IAM
    Participant

    O, FTSBB’s response seems reasonable to me as well… I’d head in the same direction.  Cheers!

    0
    #128232

    Seaotter
    Member

    SixPence,
    Apologies. I would like to ask a hypothetical question at this point.
    “At which point is a process said to be in control?”
    Let us assume that I have a specification with a target of 50, an LSL of 1 and a USL of 100.
    I do my initial run, calculate my SPC limits (Xbar-R in this case) with the UAL at 25 and the LAL at 75, and away I go. We investigate special causes and eliminate them. We carry on running and after a while, we no longer run OOC. At any point. So we recalculate the limits to improve the process, and so on and so forth, world without end.
    Eventually, my UAL is at 51 and my LAL is at 49. In relation to my spec this is meaningless.
    As I said, a hypothetical question. The figures I’ve used are ridiculous. But..
    In my final chart I am getting points that are OOC. My process is not stable.
    Does it still matter as far as the consumer is concerned? Or the engineers who have to improve the process?
    Please believe I have no intention of offending, so I’ll qualify my original question:
    “At which point is a process in control ‘enough’?”
    Seaotter
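    [Editor’s note: for readers following the hypothetical, a minimal sketch of the standard Xbar-R limit arithmetic that produces control limits sitting far inside wide spec limits. The A2 constant assumes subgroups of five, and all figures are illustrative rather than taken from the thread.]

```python
# Sketch only: Xbar-R control limits come from the data (A2 and R-bar),
# spec limits come from the customer. Constants and figures are illustrative.
A2 = 0.577                 # standard Xbar-R constant for subgroups of n = 5
xbarbar = 50.0             # grand average of subgroup means (hypothetical)
rbar = 1.2                 # average subgroup range (hypothetical)

ucl = xbarbar + A2 * rbar  # "UAL" in the post's terminology
lcl = xbarbar - A2 * rbar  # "LAL"
lsl, usl = 1.0, 100.0      # the hypothetical spec limits from the post

print(f"Control limits: {lcl:.2f} to {ucl:.2f}")   # ~49.3 to ~50.7
print(f"Spec limits:    {lsl:.0f} to {usl:.0f}")   # 1 to 100
# A small R-bar puts the control limits far inside the spec limits,
# which is the 49-to-51 situation the post describes.
```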
     

    0
    #128233

    Seaotter
    Member

    All,
    Oh deary me.
    Sorry, I seem to have my UAL and LAL the wrong way round in my last post.

    0
    #128234

    FTSBB
    Participant

    A process either has evidence of process control or it doesn’t – there is no “controlled enough”, unless you start playing games like “hey, let’s not react until we have at least 5 points out of control on the chart” – pure nonsense.  If you have out-of-control points, your process is not stable and it could conceivably drift outside of specifications at any time.  Also, this can blow holes in the capability stats you are using.  Try to determine why the points are out of control and fix it.
    BTW – what are LAL and UAL – are they synonymous with LCL and UCL?

    0
    #128236

    Mr IAM
    Participant

    This is interesting…
    In practical terms, I would say a process is “controlled enough” when the customer is no longer “feeling the variation”.  To O’s point, if you keep moving the control limits in after you achieve control, you will be trying to reach a new level and will again have points “out of control” – where does the cycle end?
    But I’m sure folks much smarter than I have pondered this before…  Cheers!

    0
    #128238

    FTSBB
    Participant

    Point taken, but what I mean is if there are points out of control, your process is not stable.  Period, regardless of where the customer specs are.  Besides, if you’re even looking at the process for improvements, hopefully this is based on some customer complaint in the first place!!!
    Also, you can get pretty tricky with control charting – leave the control limits where they have been historically and then plot your “new” data inside these.  You don’t necessarily have to keep tightening the limits – not the best practice in the world, but it’s not necessarily “wrong”, either.  I’m just nervous my initial response concerning sample size was off-base – as the dialogue continues, O has less and less faith in the process…

    0
    #128239

    FTSBB
    Participant

    BTW – the cycle NEVER ends!  Not until you are the only supplier with exceptional quality and the lowest cost!
    However, by then the SEC will be looking over your shoulder…

    0
    #128240

    Seaotter
    Member

    FTSBB,
    Thanks.
    I’m operating with a warning limit and an action limit. UAL = Upper Action Limit, etc. UAL/LAL correspond to UCL and LCL. The warning limits are set at 2 sigma.
    Thankfully, I paused for thought before replying to this one, as I’ve read your subsequent posts.
    So. The nub is, I’m not looking to improve the process, as I’m satisfied of its control due to the lack of customer complaints. If anything (mea culpa) I think the process is over-controlled. If I can eliminate special cause events (and I have, particularly gage errors, which are another thread altogether) and have confidence in the capability of the process, can I be confident in reducing my sampling frequency? And how can I prove that such a reduction would not impact customer perceived quality?

    0
    #128241

    Seaotter
    Member

    I’ll tell that to the guys in engineering tomorrow…
    “Never ends! Never! You belong to me now! Fire the operators, we need more engineers! Captain she canna take any more…!”

    0
    #128242

    FTSBB
    Participant

    In that case, I’ll default back to my original post – check out the statistical-based sample size calculations.

    0
    #128243

    jimmie65
    Participant

    Seaotter –
    My personal opinion is that you would be taking a huge risk. If your current sampling shows your process to be out of control, even though you are within spec, then you do not have enough control over your inputs (as you have indicated) to control your outputs. If your inputs changed enough, then you are going to end up out of spec. You can’t really justify reducing your sampling from a statistical basis.
    However, I’m assuming you’re in a low-margin, high-volume business (based on your comments). So compare the current cost of your testing vs. your proposed cost and the cost of a failure. Look at the worst-case scenario – if we fail to detect this out-of-spec condition, it will cost y dollars, but we save x dollars in testing. See if this makes a business case for you.
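    [Editor’s note: a minimal sketch of that business-case arithmetic – comparing the testing savings against the expected cost of a missed out-of-spec event. Every figure below is a hypothetical placeholder.]

```python
# Sketch only: annual testing savings vs. the expected cost of an escaped
# out-of-spec event. All figures are hypothetical placeholders (x, y above).
testing_saving_per_year = 40_000.0   # x: cost removed by reduced sampling
cost_of_escape = 250_000.0           # y: worst-case cost of a missed OOS event
p_escape_per_year = 0.05             # assumed chance the reduced plan misses one

expected_loss = cost_of_escape * p_escape_per_year
net_benefit = testing_saving_per_year - expected_loss
print(f"Expected annual loss from escapes: {expected_loss:,.0f}")
print(f"Net annual benefit of reduced sampling: {net_benefit:,.0f}")
```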
     

    0
    #128245

    Seaotter
    Member

    Jimmie65,
    Your comments make sense.
    Just to throw another factor into the equation, what if the spec is wrong?
    Big what if, I understand. But suppose I can show that both our USL & LSL are wide (or in this case narrow) of the margin?
    What if we were to concentrate more (and less expensively) on testing the inputs? And what if we were to show that the specifications of the finished product are too tight?
    And to throw in another bone….
    How can you convince a supplier to adopt SS? What if your business makes up a very small proportion of theirs?
    I agree with your x saving against y cost. The y cost would have to take into account… I can’t remember the word… ‘loss of face’, ‘loss of respect’? That costs a lot. That said, it’s what I’m aiming for.
    Seaotter

    0
    #128248

    Seaotter
    Member

    BTW,
    I’m aiming for the saving. Not the loss of respect.
    I’m new to this.

    0
    #128249

    jimmie65
    Participant

    Seaotter –
    Specs are usually wrong. Ask the engineers who set them up what the VOC had to say…
    Always best to test inputs as opposed to outputs. Can you afford the testing to determine what the specs on incoming material should be?
    My company offers Six Sigma training to our suppliers, and we haven’t had to twist any arms to get them there. But we’re a pretty large proportion of most of their business. It’s worth a try, though.
     

    0
    #128254

    Seaotter
    Member

    Jimmie65,
    Ahhh….
    This is also a hurdle I have to leap.
    The volume of incoming material, compared to resource for testing, is prohibitive. The volume of material produced by our suppliers, compared to our uptake, is negligible. They do do their own testing. And for their purposes it is adequate. For ours it is not. For them to introduce a level of testing that would bring their process ‘into control’ as far as we are concerned, would result in them stopping and starting their own process over and over and over and….there goes a ‘stable process.’
    So. The specs are wrong.
    I would suggest that your recommendation would be to prove out the spec limits? A fair amount of work, but if it makes the point I’m with you.
    It wasn’t the engineers wot invented the spec though. Don’t blame them.
    Seaotter

    0
    #128278

    Charles H
    Participant

    Sea:
    You may as well go ahead and stop your charting and data gathering – it sounds like the analysis is all after the fact anyway.  You have no control over your incoming materials, your process is out of control, and your measurement system is broken, all per your input to this thread.  You are not interested in using control charting for improvement of this process, so why bother?  It sounds a lot like you have a predetermined solution to reduce your data gathering and analysis costs as you see no value in them (small wonder), and that you are now trying to use process capability (which, in your analysis, is meaningless) to justify your decision.  Why go through all the bother?  Just stop doing it – no justification needed.  I personally find tons of justification in your posts to continue the data gathering and maybe even increase its frequency.  But that’s just funny ole me…
    Charles H.
    PS:  Would also be interested in the 95% and 99% confidence intervals for your data sets compared to the specs.

    0
    #128279

    Whitehurst
    Participant

    Plan a test. What did you find from your actual sampling inspection? Plan a simulation with different sampling plans, then reduce sampling inspection accordingly on random lots, as in FTSBB’s post, and run tests to see what would escape using the new sampling plans. With the results, evaluate what is best for your business.
    joe

    0
    #128291

    KKN
    Participant

    From your postings, I gather that you are more interested in what the customer is seeing (VOC and specs) vs. “process control”. If that is the case, I would focus efforts on finding out the customer’s actual specs (not the engineers’, unless they are the customer) and find your Sigma compared to the USL/LSL. If the customer is seeing a good Sigma, all is good in their eyes. If not, work to fix it. Keep in mind, though, that there are internal business metrics you want to measure as well (FTY, Cycle Time, $/unit, …) that may drive you to make improvements that reduce variation.
    Remember: the purpose of SPC (in my layman’s words) is to give visibility of process characteristics to alert you to changes (shift/drift) in the process. It isn’t for process improvement.
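    [Editor’s note: a minimal sketch of “finding your Sigma compared to USL/LSL” in the sense KKN describes – the distance from the process mean to the nearer spec limit in standard deviations. The numbers are hypothetical placeholders.]

```python
# Sketch only: "sigma level" as the distance from the process mean to the
# nearer spec limit, in standard deviations. Figures are hypothetical.
mean, sd = 50.2, 1.1      # hypothetical process mean and standard deviation
lsl, usl = 45.0, 55.0     # hypothetical customer spec limits

sigma_level = min(usl - mean, mean - lsl) / sd   # Z to the nearest limit
print(f"Sigma level (Z to nearest spec) ≈ {sigma_level:.2f}")
```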

    0
    #128307

    Seaotter
    Member

    Mr H,
    From a purist POV I agree with you 100%.
    The fact that the process as a whole (looking at all the outputs and all the inputs at the same time) is OOC is clear. When looked at batch to batch (inputs) and run to run (outputs) the process can be said to be in control, though. When special causes arise the operators react and adjust, like they should. 
    I’m exaggerating slightly, but the control limits need to be recalculated for each batch and each run. Obviously, this costs $$.
    If I had the time and the resource then I would be pushing for an increase in the sampling and a continuous improvement of the processes.
    That said, taking the ‘pragmatic view’, you’re right. To a certain extent I see the use of SPC for this process as next to useless if the charts are not used to improve it. My opinion, under these circumstances, is this:
    If I start with input ‘a’ and test it, I get result ‘x’, which is well within spec. The level of variation of input ‘a’ is such that any further testing will give effectively the same result. The same thing happens when ‘a’ runs out and we start using ‘b’, except ‘b’ gives result ‘y’, and ‘c’ gives result ‘z’. ‘y’ and ‘z’ are also well within spec; their variation is small but their means differ from ‘a’s. Rather than running around Xbar, I’m now either above it or below it: OOC, or new chart limits. So, test only at a change of inputs (reduced frequency, saves $$), and the process runs in control.
    I hope that makes sense. It’s probably what I was trying to say in the first place.
    Seaotter.
    P.S. I was going through some of the data again today and was seeing Ppk values at or above 2.25 at the 95% level pretty much all the time. At 99%? Not a clue, but I’ll recheck tomorrow.

    0
    #128308

    Seaotter
    Member

    Yes!
    So, from the customer quality department I find out the areas that the consumers are concerned about and how (even if!) they relate to the processes we are trying to control.
    To go further…
    Some of the issues that the customers have raised are related to the process. These issues were investigated and found to have happened when the inputs changed. So we monitor intensively at input changes (see the reply to Charles H’s post) and then, once satisfied, let the machine do its thing until the next input change. These occur pretty frequently, but not as frequently as we currently test.
    That actually sounds a bit frivolous, but I’m pretty sure it’s right from the consumer’s point of view. From a strict SPC point of view, it probably sounds hateful.
    Thank you,
    Seaotter.

    0
    #128310

    miranax
    Participant

    What you have is a “loose cannon” process.  It may be within spec, but it may drift out of spec at any given time.  However, if the specs are “far away” from the control limits, you may have some elbow room to maneuver.  I mean, what’s the point in wasting resources on determining a special cause for material that is within specification?  Now there’s a philosophical question for all BBs.

    0
    #128320

    Seaotter
    Member

    Nice point, Miranax.
    I understand the rules of SPC. There are other processes to which I will rigorously apply those rules (e.g. new product development or the introduction of new raw materials).
    I have already asked the question regarding when a process is in control enough, so I won’t labour the point.
    If I can walk down a corridor with my arms outstretched and not touch the walls, should I make the corridor narrower?
    Also a philosophical question.

    0
    #128332

    CT
    Participant

    Because the true VOC designed into the part may have an allowable tolerance, but the optimum performance deliverable is NOT at the mean of that tolerance. Something like a tolerance of 0″ to 2″, where the optimum performance for assembly (or whatever the application is) may actually be at .625″. Yes, the part is functional anywhere within the given tolerance, but it is optimized at a point which is not the mean of the tolerance. That’s why you address special causes for material that is within spec.
    Ct

    0
    #128340

    Seaotter
    Member

    CT,
    In all cases in this process, the target (or optimum) is set centrally with regards to the USL and LSL.

    0
    #128369

    Chalapathi
    Participant

    This is a classic mistake in applying SPC.  Are you aware of rational subgrouping?  In this case, how you collected your samples is the key. You can refer to any good SPC book for more details.

    0
    #128383

    Anonymous
    Guest

    Chalapathi,
    I’ve tried to follow the thread to see who you replied to… it seems to be the original poster. If I’m correct, I’m not sure how you came to this conclusion. How do you know that the within-subgroup variation is greater than the subgroup-to-subgroup variation? Perhaps I’ve missed the point you’re making.
    My take on the problem is the rather old-fashioned view that everything has to be recorded on an SPC chart – or, even worse, every variable that can disturb the response.
    To my mind, one of the problems with the Six Sigma approach (at least the version due to MH) is the assumption that a Ppk = 2 is a robust process, with or without including a shift.
    My reasoning for taking this position is that a robust process can only be realised by setting up the process in a ‘flat’ region of response space.
    There is also another reason for doubting SPC and Ppk > 2, which is that both these assume a ‘natural’ process capability without adjustment. Since many processes have to adjust due to changes in chemical activity of raw materials, this can hardly be considered to meet the required assumptions.
    Cheers,
    Andy

    0
    #128395

    thevillageidiot
    Member

    What about the idea of moving from Cpk to a yield-based “go/no-go” measure for defectives?  Wouldn’t it prove easier for the operators to collect (assuming so), and be a better indication of quality as viewed through the eyes of your customer? You can still use less frequent control charting (p or np) to monitor the process and keep the operators engaged, provide management with “proof” of the process quality, and view the quality of the process from the external perspective of your end user (defectives for external quality assessment versus defects for internal quality assessment)… just a thought… not necessarily a good one, mind you…

    0
    #128535

    Gourishankar
    Participant

    Seaotter
    Sample size and frequency can be established through the Average Run Length (ARL), which is given by 1/p, where p is the probability of any point exceeding the control limits.
    For example, if the x-bar chart uses 3-sigma limits, p = 0.0027, in which case ARL ≈ 370. This means that even if the process is in control, an out-of-control signal will be generated, on average, once every 370 samples. The time intervals can be calculated accordingly (see the sketch at the end of this post).
    However, this is a crude method.
    There are other, more convenient ways of calculating sample sizes (use of OC curves). I don’t think Ppk values will relate to sample sizes and frequency. Please refer to Statistical Quality Control by Montgomery (Chapter 4) for an excellent treatment of this topic.
    I can do the calculations for you, if you provide me the data.
    I hope this is of some help.
    Gourishankar
    Chennai , India
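    [Editor’s note: a minimal sketch of the ARL arithmetic described above, plus the companion calculation for how quickly a chart detects a real shift. Standard 3-sigma, normal-distribution assumptions; illustrative only.]

```python
# Sketch only: in-control ARL for 3-sigma limits, and the ARL to detect a
# real mean shift of k sigma (subgroup size 1). Standard normal assumptions.
from statistics import NormalDist

p = 2 * (1 - NormalDist().cdf(3))   # P(point beyond +/-3 sigma) ≈ 0.0027
print(f"p = {p:.4f}, in-control ARL ≈ {1 / p:.0f} samples between false alarms")

k = 1.5                              # hypothetical shift size, in sigma units
beta = NormalDist().cdf(3 - k) - NormalDist().cdf(-3 - k)   # miss probability
print(f"ARL to detect a {k} sigma shift ≈ {1 / (1 - beta):.0f} samples")
```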
     
     
     
     

    0
    #128539

    Cravens
    Participant

    I have a spreadsheet that may help with the cost calculation for higher Cpk values. It takes the current process and breaks down the cost of Cpk values 0.5 at a time. You will be able to target a value that is cost effective and set the target there. For example, I had a process similar to the one you have described and found that I should target a Cpk value of 3.5 as opposed to 4, due to the CBA. It might help you set the target; once that is achieved, then set the sample size.

    0
    #128620

    Redgy
    Participant

    I try to look at the problem from a different angle. For me, control charts are the tool for detecting special cause events. Even if your process is fine now, there might be a “Murphy” waiting to hit you.
    If you reduce your sampling, it might take you longer to detect when something (which has not happened yet) goes wrong. Generally I would investigate whether you could change your monitoring towards automated online measurement, or make sure that equipment and/or process parameters generate an alarm when going outside the control limits. This, tied to the SPC you are doing, might help in making the right decision.
    I have similar dilemmas in my day-to-day job.
    Best regards
    I’m just wondering what your sample plan really is, and whether it’s a big load for the people to do the charting. If not, then we have to keep in mind that the control charts are a method of triggering us if we have a special cause. If you reduce the number of samples, you have to take into account how detectable a special cause will be if one occurs.

    0
    #128622

    Szentannai
    Member

    Hi,
    I’m not really familiar with the manufacturing context, but I’d ask a few questions before investing a large effort into test reduction:
    1. Is there a sort of transfer function linking cost and the amount of testing?
    2. Is there some kind of FMEA analysis concerning the risks of missed alarms in the testing (and false alarms as well, maybe)?
    Based on these two items you could figure out how much risk is worth taking for how much cost reduction.
    Maybe you already did this analysis? It would help make the discussion more concrete – along the lines of “I propose to reduce the testing activity by this and this amount by reducing the sampling frequency by this and reducing the sample size by this amount, resulting in increased risks to this and this and savings of about this amount “.
    Would this make sense to you?
    Regards
    Sandor

    0
    #128879

    Michael Schlueter
    Participant

    Hello Seaotter,
    I haven’t read the full thread, but just probed into some of your replies. A few suggestions.
    1) You can prove it pragmatically. You have access to the full data set (each machine, at your current SPC intervals). By skipping data (in a way which reflects your future reduced sampling rate) you can ‘simulate’ your future strategy with your past data. From comparing the old (full data set) with your new strategy (reduced sampling) you can draw conclusions. (A rough sketch of this replay idea follows at the end of this post.)
    2) You stated that the result doesn’t vary, regardless of the variability introduced by the machines and the suppliers. To me this sounds like your product is robust (i.e. only slightly sensitive) to those changes. You can consider a series of Taguchi experiments to further increase robustness (i.e. reduce the standard deviation even further, center on-target) to the point where SPC might become obsolete (at least in its present form).
    3) You can consider on-line quality engineering, which is a way to balance required quality levels, inspection frequency, adjustment frequency (you can adjust your product back on-target, can’t you?), cost-of-low-quality imposed onto your customer, measurement cost and adjustment cost in a rational way.
    4) You stated the process is rather complex. I have access to best practices to simplify complex things.
    I’d be happy to help you with issues #2, #3 and #4.
    Hope this helps. Kind regards, Michael Schlueter
     
    (You can cont(r)act me by replying to this post.)
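    [Editor’s note: a minimal sketch of the ‘replay’ idea in point 1 – re-running the existing chart data at a reduced rate and comparing out-of-control signal rates. The data, limits and seed below are simulated placeholders, not the real process.]

```python
# Sketch only: replay simulated 'historical' data at a reduced sampling rate
# and compare out-of-control signal rates. Data and limits are placeholders.
import random

random.seed(1)
ucl, lcl = 53.0, 47.0                               # hypothetical control limits
full = [random.gauss(50, 1) for _ in range(2000)]   # stand-in for the full data set
reduced = full[::4]                                 # keep every 4th subgroup

def ooc_fraction(points):
    return sum(1 for x in points if x > ucl or x < lcl) / len(points)

print(f"Full sampling:    {ooc_fraction(full):.4f} OOC fraction")
print(f"Every 4th sample: {ooc_fraction(reduced):.4f} OOC fraction")
# If the reduced plan flags special causes at a comparable rate and in the
# same regions of the run, that supports the coarser sampling plan.
```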

    0
    #128917

    Seaotter
    Member

    Michael,
    Following your points in order:
    1) I’ve done this, and reducing the testing regime by 75% (i.e. taking every 4th subgroup from the available data) I don’t seem to lose any sensitivity. I may be looking at this the wrong way, but the OOC points occur in the same places and at a proportionate frequency. It’s like a fractal.
    2) If I were to call a batch of raw material a subgroup, then the within-subgroup variation is low in comparison to the between-subgroup variation. So the results vary between subgroups, not within them. The unfortunate situation is that this between variation is not (generally) predictable. I’d say that the process is robust, but the inputs are not. This is my own opinion, and I’d be pleased to hear arguments pro and con: if I am willing to accept that I have a given level of variability in my inputs which will shift my mean test result from point x to point y to point z and back again, and that the level of variability within any given input is the same as that of any other, then shouldn’t the test frequency be set at the change of inputs? (A rough illustration follows at the end of this post.)
    3) The adjustment frequency is actually the issue. A lot of time is spent making adjustments to the process which don’t actually affect the results. When the inputs change, the mean changes but the variability does not, regardless of adjustments. Clearly, this results in OOC conditions, a change of SPC limits and a loss of time and $. Now I will sound like a pariah, but: using the range (and/or SD) limits, the process is still running within spec. From a consumer’s POV the product completes its functional requirements, so they don’t care whether the mean is at x, y or z. I’ve previously asked the question, “When is a process in control enough?” Should we be aiming for the target or be happy that we are in spec? In this particular case, being off-target does not mean low quality. Being out of spec does. If there is a hint that the product is out of spec, then I don’t let it out the door.
    4) I’d probably be in breach of all kinds of corporate rules if I started detailing the complete over-engineering of this process. It’s probably not at the level of tornadoes, but I’d be surprised if it’s far away.
    Hell, you could cont(r)act me!
    Thanks for your reply, by the way. I seem to be losing a little of what I’m trying to get across. This is a cost of quality vs. voice of the consumer issue, which you’ve grasped. And apologies to everyone if that hasn’t come across in my previous postings. As I see it the measurement and adjustment (and re-measurement) costs far exceed the consumers expectations of the finished product.
    Kind regards also,
    Seaotter
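    [Editor’s note: a minimal sketch of the within- versus between-batch picture described in point 2 – simulated batches whose means shift while the spread inside each batch stays small. All numbers are made up.]

```python
# Sketch only: each raw-material batch shifts the mean while variation
# inside a batch stays small. All numbers are simulated placeholders.
import random
import statistics

random.seed(2)
batch_means = [49.0, 50.5, 51.2, 49.6]     # hypothetical shift per input batch
batches = [[random.gauss(m, 0.2) for _ in range(30)] for m in batch_means]

within_sd = statistics.mean(statistics.stdev(b) for b in batches)
between_sd = statistics.stdev(statistics.mean(b) for b in batches)
print(f"average within-batch sd:   {within_sd:.2f}")
print(f"between-batch sd of means: {between_sd:.2f}")
# When between-batch variation dominates, an Xbar chart whose limits are
# built from within-batch ranges will flag every batch change as OOC.
```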

    0