iSixSigma

Expanding control limits


    #52385

    Picklyk
    Participant

    Hello All,
    While reading a stats book, I noticed that when calculating the upper and lower control limits for an Xbar chart, the formula uses ±3σ/√n. I was curious whether the 3 could be increased so that the limits would expand, allowing more points to fall within the control limits.
    Thanks for your insight.
    Regards,
    Jay
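
    For concreteness, a minimal sketch of what raising the multiplier does to the limits (Python; the grand mean, sigma, and subgroup size are made-up illustrative numbers, not data from this thread):

    ```python
    import numpy as np

    xbar_bar = 50.0  # grand mean (center line) - illustrative
    sigma = 2.0      # standard deviation of individual measurements - illustrative
    n = 5            # subgroup size

    se = sigma / np.sqrt(n)  # standard error of the subgroup means
    for k in (3, 4, 5):
        print(f"k={k}: LCL={xbar_bar - k * se:.2f}, UCL={xbar_bar + k * se:.2f}")
    # Raising k does widen the limits, so more points fall inside them -
    # but as the replies below explain, that only hides the signals.
    ```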

    #184273

    Old MBB
    Participant

    Now you’re starting to tick me off…
    +/- 3 std devs will include 99.73% of Normal variation, and anything that falls beyond those limits is therefore flagged as unlikely. Sure, you can expand those limits, but it would be like ignoring your gas gauge and “low fuel” light and deciding to wait until the engine dies…
    Between this post and your earlier one about Xbar sampling you seem to not care what your data says, you just want it to pass, even if you have to change the rules of SPC to do it…
    Say it ain’t so…

    #184274

    Taylor
    Participant

    Jay
    I thought you said you were reading a “Stat Book”. Did you not get the point about ±3 standard deviations?

    0
    #184276

    Picklyk
    Participant

    I’m sorry if I offended you. I was just curious, and it’s always better to ask than to go and do something blindly. My statistics book states that “the use of charts based on 3 standard deviation limits is traditional, but tradition is certainly not inviolable.” This led me to believe that using 4 or 5 standard deviation limits might be a possibility. In my case, all of the data is well above the manufacturer’s tolerance, so increasing the standard deviation limits and including points outside of 3 standard deviation limits would not negatively influence the quality or the integrity of the part.
    Thank you for your response. I appreciate the knowledge you have passed along to me; however, given that this is a forum for learning, I do not appreciate your snide remarks. I am an engineer-in-training, and am slowly learning the ropes. I may not have all of the knowledge, but at least I am willing to put myself out there and ask questions.

    #184277

    Picklyk
    Participant

    Apparently I didn’t, and the confusion resulted from this clause in my textbook:
    “The use of charts based on 3 standard deviation limits is traditional, but tradition is certainly not inviolable.”
    Believe me, I’m not stupid, and this is a completely legitimate question. Please respect my curiosity and the fact that I am new to this and may not be as knowledgeable as you.
    Regards,
    Jay

    #184279

    Taylor
    Participant

    OK Jay, let’s talk this through. If you expand your control limits, what is going to be the result of doing that long term in your process?
    Sorry for my apparent rudeness; I just returned from the chiropractor, and my girlfriend says I cannot be mean, so I come here.

    #184281

    Picklyk
    Participant

    That’s a good question. I’m not entirely sure. Care to enlighten me?
    No worries about the rudeness. Just please understand that I am genuinely trying to understand this. When I present the information to my boss, I need to understand it inside and out. If he says, “well, why don’t we just expand the limits,” I need to be able to explain to him flat out why it can’t be done.
    Thanks again.
    Jay

    #184282

    Taylor
    Participant

    Jay
    In control charts, ±3 std dev is used to divide the chart into zones above and below the center line (Xbar) such that 99.73% of points fall within that range.
    Under the assumption that the mean (and variance) of the process does not change, the successive sample means will be distributed normally around the actual mean. We also know (because of the central limit theorem, and thus the approximate normal distribution of the means) that the distribution of sample means will have a standard deviation of sigma (the standard deviation of individual measurements) over the square root of n (the sample size). It follows that approximately 95% of the sample means will fall within the limits ±1.96 * sigma/sqrt(n). In practice, it is common to replace the 1.96 with 3 (so that the interval will include approximately 99.7% of the sample means) and to define the upper and lower control limits as the plus and minus 3 sigma limits, respectively. Expanding the limits beyond ±3 sigma would let the process drift toward the tolerance limits before the chart gives any signal.
    I hope this clarifies your question.
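
    A quick numerical check of the percentages above (a Python sketch; the simulated process values are purely illustrative):

    ```python
    import numpy as np
    from scipy import stats

    # Coverage of +/- z on a normal distribution
    for z in (1.96, 3.0):
        print(f"+/-{z}: {stats.norm.cdf(z) - stats.norm.cdf(-z):.4%}")
    # ~95.00% and ~99.73%

    # CLT check: the subgroup means scatter with sd ~ sigma / sqrt(n)
    rng = np.random.default_rng(0)
    sigma, n = 2.0, 5
    means = rng.normal(50, sigma, size=(100_000, n)).mean(axis=1)
    print(means.std(), sigma / np.sqrt(n))  # both ~0.894
    ```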

    #184284

    Robert S
    Member

    Chad, now explain the 1.5 thingy to him. Just when it all started to make sense….

    #184286

    Cone
    Participant

    What you have read in your book is exactly right. What it did not tell you is when and why to do it. You don’t just do it for convenience.
    Let me take off on Old’s example of a gas gauge. I drive a Mustang; when it hits E I start to coast. No room for expanding anything – in fact I contract, I get nervous below 1/8. My wife drives a Toyota Sienna (I know – yuk, but the kids really like it and it has room an SUV never dreamed of). It has a gas gauge and an electronic DTE display. When the DTE says 0, the gauge is sitting right on E – it takes 16.1 gallons to fill every time. Occasionally, kids are crying or fighting or whining about watching yet another Barbie movie for the three-year-old (yea for headphones) and we go below E and the DTE just says 0. One time I put 18.5 gallons in to fill up. I checked the owner’s manual and it has a 21 gallon tank! Just for the hell of it, I drove 75 miles beyond 0, knowing the mileage is steady at 20.1, and sure enough, it took 19.8 gallons. Point is, Toyota decided to make the minivan robust with respect to the fuel information given to the driver. I think they know about whining and crying and decided it would be unacceptable to run out of gas in a car made for kids.
    What does this have to do with your situation? IF you have an extremely competent process that for some reason you still choose to control chart (I would not, by the way; there are better controls in this case), loosen the limits, but not so much that it could cause you issues because the customer perceives a difference. Note I did not say anything about specs; specs are usually pulled out of someone’s butt and not based on science. Differences within spec can really screw up your customer’s process – read Taguchi. Know your product and know your customer.
    BTW – you should have seen my wife on those 75 extra miles.

    #184292

    Picklyk
    Participant

    Thank you for your insight Chad. This definitely clarifies things for me.
    It appears this is probably not the best method for the data I had. I might as well elaborate on what I’m actually doing, and perhaps you could suggest another method of approaching this.
    Basically, the company I work for repairs aerospace components using thermal spray coatings. In order to eliminate 100% testing of the components for bond strength and hardness, the company has opted to use some sort of process control. With the data we have, the established control limits on the Xbar charts exclude a large portion of the data points. The data we have for bond strength and hardness exceeds the values that the manufacturer requires. Any suggestions on how to approach this?

    #184293

    cobb
    Participant

    Jay, you are right. Control limits work as filters that separate ‘noise’ (random variation) from ‘signals’ (non-random variation). As you may be aware, the variation contained within the control limits is random variation. You are expected to adjust the process when you detect non-random variation (after ascertaining the root causes). The more you expand these limits (4 sigma, 5 sigma, etc.), the higher the risk of interpreting ‘signals’ as ‘noise’ and hence not reacting to potential ‘out of control’ situations. The reverse is also true: by squeezing these limits to, say, 2 sigma, you run the risk of interpreting an ‘in control’ or ‘stable’ process as ‘out of control’ or ‘unstable’. These are what we typically call Type I and Type II errors in hypothesis testing. Having said that, I recommend you stick to the ‘traditional’ 3 sigma limits, as practitioners have found them highly robust over the past 8–9 decades. Trust the above helps.
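
    A hedged sketch of the two risks described above for a normal process (Python; the 1-sigma shift is an arbitrary example, not anything from the thread):

    ```python
    from scipy import stats

    shift = 1.0  # hypothetical sustained mean shift, in standard-error units
    for k in (2, 3, 4, 5):
        alpha = 2 * stats.norm.sf(k)  # Type I: false alarm per in-control point
        beta = stats.norm.cdf(k - shift) - stats.norm.cdf(-k - shift)  # Type II
        print(f"+/-{k} sigma: alpha={alpha:.5f}, beta={beta:.3f}")
    # Widening the limits shrinks alpha but drives beta toward 1:
    # the chart goes quiet while real shifts pass unnoticed.
    ```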

    #184295

    Rick L
    Member

    In practice, some processes are not worth investing a lot of time and effort to bring within tighter control limits, so some project managers have used wider control limits at a very early stage so they could close the project and move on to other potential projects.
    But as continuous improvement practice, we always track these projects and come back to them for further enhancement once the higher-priority projects are completed.
    Technically it is not recommended to expand the control limits, since it exposes you to noise and error risk, but in real business application… we do need to be flexible with the situation to drive the cost–benefit balance.

    #184298

    Craig
    Participant

    The answer is yes, as long as you have the data to support the change. If it is a highly capable process it is worth pursuing. It is a matter of protecting the customer and keeping your line running in a stable and economical fashion.
    Learn the concepts of ARL, alpha risk, OC curves, etc. If you are a good process engineer, you have reduced variation such that the limits have been progressively tightened. There is a point at which it is not reasonable to tighten the limits any further.
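
    For the ARL concept Craig mentions, a minimal sketch (assuming a Shewhart Xbar chart using only the single-point-beyond-limits rule):

    ```python
    from scipy import stats

    def arl(shift, k=3.0):
        """Average run length to a signal for a mean shift of `shift`
        standard errors, with +/- k sigma limits (single-point rule only)."""
        p_signal = stats.norm.sf(k - shift) + stats.norm.cdf(-k - shift)
        return 1.0 / p_signal

    print(arl(0.0))       # in-control ARL at k=3: ~370 points per false alarm
    print(arl(1.0))       # a 1-standard-error shift signals in ~44 points
    print(arl(0.0, k=4))  # at k=4 the in-control ARL stretches to ~15,800
    ```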

    #184299

    Cone
    Participant

    Amen – good advice. Now the guy just needs to understand.

    #184301

    Picklyk
    Participant

    This is from a previous posting of mine and describes the project I’m working on:
    “The company I work for repairs aerospace components using thermal spray coatings. In order to eliminate 100% testing of the test coupons sprayed concurrently with the parts for bond strength and hardness of the coating, the company has opted to use some sort of process control. With the data we have, the established control limits on the Xbar charts exclude a large portion of the data points. The data we have for bond strength and hardness exceeds the values that the manufacturer requires.” 
    So in this case,  we have data that consistently exceeds the values required by the manufacturer, but a large portion of the values do not fit well within a 3 sigma Xbar chart. How could we eliminate 100% testing of the coupons, but still prove that our process exceeds the values required?

    #184304

    Old MBB
    Participant

    Jay,
    The fact that you are working on aerospace components says that we should be very careful to not take risky shortcuts.  I’m guessing that you have one or more test coupons per batch – so each coupon is already a sample.  To then say that you are sampling the coupons suggests that some batches may never be tested.
    The Xbar chart may be telling you that there is unexpected variation in your process, in which case you need to determine what the heck is going on and fix it.  But hey, this is the fun part of being a mfg engineer.
    There may be another explanation which will really get you into Six Sigma fast – non-normal data. If your measurements have a hard limit (coating thickness can’t be less than zero, for example), then you may have a data set that’s skewed to one side. Depending on how bad the skew is, it may throw off your SPC by putting points in the tail beyond 3s. If you want to go down this path, you need to check the normality of the entire data set (not the averages), find a distribution that fits (try lognormal first, then Weibull), and then determine capability (Cpk, Ppk, or Z) using subgroups and your spec limits. If all of that works and looks good, do a Google search on Pre-Control for your ongoing process control.
    Good Luck
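
    A sketch of that distribution check (Python with SciPy; the file name and data are placeholders, not anything from this thread):

    ```python
    import numpy as np
    from scipy import stats

    data = np.loadtxt("bond_strength.txt")  # hypothetical individual measurements

    print(stats.shapiro(data))              # normality test on the individuals
    for dist in (stats.lognorm, stats.weibull_min):
        params = dist.fit(data, floc=0)     # fix the hard lower bound at zero
        ks = stats.kstest(data, dist.cdf, args=params)
        print(dist.name, "KS p-value:", ks.pvalue)
    # Use the best-fitting distribution's percentiles to compute capability
    # (Ppk or Z) against the spec limits, rather than assuming normality.
    ```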

    #184305

    Sloan
    Participant

    Jay,
    You made a couple of comments in this post that concern me a little. You said, “With the data we have, the established control limits on the Xbar charts exclude a large portion of the data points.” Were the control limits calculated from this new data you have been collecting, or are they maybe a leftover from your previous control methodology? I have seen an instance where the sampling or testing methodology was changed but historical control limits and a historical mean from previous testing were kept, and the data looked really wacky. Remember, this is the “voice of the process,” and you should calculate your mean and control limits from your current data. Hopefully I’m preaching to the choir on that one.
    You also said, “…we have data that consistently exceeds the values required by the manufacturer, but a large portion of the values do not fit well within a 3 sigma Xbar chart.”
    This tells me that while you are easily hitting your customer’s target, there may be some special cause variation in your process. Whenever you have unexplained variation in your process your process is by definition unpredictable. So while you are easily within the customer specifications today, you cannot reliably predict that you will be tomorrow until you get your process into statistical control. As OldMBB said, that could be because your data might be highly non-normal, but until you know the cause of your out of control points, you should not rest comfortably and assume that you will always exceed your customer specs.
    Moving the control limits out to 4 std. dev. will only mask what may be a serious problem and make it less likely to see a problem before it hits your customer. Find out the root cause of those out of bounds data points, fix it and then see where your process stands. The root cause could be a result of a process that is out of control or a problem with your measurement system.

    #184307

    BTDT
    Participant

    Jay:
    Quick review: control limits (low fuel light) are independent from specification limits (running out of gas).
    The control limits are calculated to alert you when the process may be going out of control. If your specification limits are outside the control limits, these alerts mean you should look at the process in order to make any corrections BEFORE the process eventually drifts out of specification.
    Provided you have calculated the control limits on recent data, you should be getting about one data point in 400 outside the limits with only random variation (alpha error – false alarm); anything more means your process is not well controlled, regardless of whether those points are within specification. Increasing the limits beyond 3 s.d. is like turning off a smoke detector: your ability to detect real differences (beta error – failure to detect) goes way down.
    Another possibility: since the breaking strength of bonding material typically follows a non-normal distribution, either lognormal or Weibull, you should calculate control charts with those assumptions. This would also explain the larger than expected number of data points that exceed the upper control limit AND the upper specification limit.
    Cheers, Alastair
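
    One common way to act on Alastair’s suggestion is probability-based limits: place the same 0.135% tail probability as ±3 sigma at each end of the fitted distribution. A sketch, with made-up Weibull parameters standing in for a real fit:

    ```python
    from scipy import stats

    shape, loc, scale = 1.8, 0.0, 60.0  # hypothetical fitted Weibull parameters
    p = 0.00135                         # one-sided tail probability of +/-3 sigma
    lcl = stats.weibull_min.ppf(p, shape, loc, scale)
    ucl = stats.weibull_min.ppf(1 - p, shape, loc, scale)
    print(f"LCL={lcl:.1f}, UCL={ucl:.1f}")  # asymmetric limits matching the skew
    ```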

    #184315

    Don Howie
    Participant

    Gary,
    The question is about control charts, which are about natural limits, and you write about spec limits. Do you know the difference?
    Obviously, you do not understand setting realistic (statistically derived) tolerances if you think “specs are usually pulled out someone’s butt.”
    That is a very ignorant comment. Maybe you need to understand Design for Six Sigma.
    And you claim to be a consultant!

    #184316

    MBBinWI
    Participant

    Don:  I am a DfSS expert, and I would still agree with Gary that most spec limits are pulled out of smelly orifices.  Too few are derived from a mathematically based cascade from true customer observable acceptability criteria.
    Just my humble opinion.

    #184321

    Don Howie
    Participant

    Your humble opinion is still very wrong.
    Just because you worked with a few people who are not intelligent or educated in statistically based tolerancing methods does not make it the norm.

    #184322

    Cone
    Participant

    I am a consultant, and apparently only one of us has real experience. Come down off your arrogant high horse if you have something you want to discuss.

    #184323

    Cone
    Participant

    Don,
    Now you have me intrigued. Please tell us about your vast experience and how you know so much about the subject.

    #184324

    Don Howie
    Participant

    Gary,
    Real experience? So your experience comes from working with people whose specs are from where?
    Nonsense!

    #184326

    Sloan
    Participant

    “Can’t we all just get along?”
    I think one’s perspective depends largely on the context in which they have seen specs determined. If you deal largely with transactional, people-driven specs like “wait times” for call centers or standing in line, then a lot of spec limit determination could very well be just a good guess based on VOC comments or behavior observation.
    If, however, your experience has been in hard manufacturing, where part tolerances are well established based on downstream needs, then the specs would hopefully be rigorously tested and scientifically calculated.
    My experience is mostly on the transactional side of the house, and my experience is that a lot of specs are, at best, educated guesses. Keep in mind, though, that these are often situations where no spec has ever been measured before, so an educated guess is far better than measuring nothing at all.

    #184327

    Cone
    Participant

    Tell us about yourself and how you know so much.

    #184328

    Cone
    Participant

    Who would want to get along with someone who doesn’t know what they are talking about and is belligerent about it?
    Hey Donnie boy, did you even think about what I wrote to the guy?

    #184329

    Don Howie
    Participant

    Gary,
    So you admit I know a lot!
    Is that because you are wrong?

    #184330

    Cone
    Participant

    I don’t think you know Jack. I am trying to figure out why you think you do.

    #184331

    Don Howie
    Participant

    Gary honey!
    You think you know much, but your posts sound like your experience is quite narrow.
    Where did you work, where specs came from @#$%?

    #184341

    Cone
    Participant

    All right Don, I went back and reread what I wrote. Do you want to explain in plain English what is not good advice there – even if specs are statistically derived?

    #184342

    Cone
    Participant

    MBB,
    I am pretty sure this is Robert S just misbehaving again.

    #184345

    Craig
    Participant

    How about controlling input variables? You have to demonstrate that if the inputs are on target and in control, the Y’s will be adequate. You might be able to reduce your sampling this way.
    How large are the rational subgroups, and what sort of time period do they represent? This could be why the chart is signaling too much (a poor estimate of the standard deviation).
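
    To see how subgrouping drives the limits, a sketch comparing the within-subgroup sigma estimate (Rbar/d2, which sets the Xbar limits) to the overall sigma; the data are simulated, and d2 = 2.326 applies to subgroups of n = 5:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    subgroups = rng.normal(50, 2, size=(30, 5))  # 30 subgroups of 5, illustrative

    rbar = np.mean(subgroups.max(axis=1) - subgroups.min(axis=1))
    sigma_within = rbar / 2.326                  # d2 for n = 5
    sigma_overall = subgroups.std(ddof=1)
    print(sigma_within, sigma_overall)
    # If each subgroup spans too narrow a slice of time, sigma_within
    # understates the real variation, the limits come out too tight,
    # and the chart signals constantly.
    ```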

    #184376

    Mario Perez-Wilson
    Participant

    Don,
    Go easy on Gary. I know him well; he is a good guy and quite knowledgeable. I used to invite him, Mike Carnell, and Mikel Harry to Motorola’s FMU-139 production area every time I was conducting experiments. Gary always made good contributions, and believe me, he has a strong understanding of control-chart theory as well as tolerancing methods.
    The “…spec…butt…” thing is just a lapsus calami.
    Mario Perez-Wilson
    http://www.mpcps.com

    #184387

    Jonathon Andell
    Participant

    The first caveat is the assumption that the control charts drive your organization to make one of three decisions based on what the chart displays: continue operating with no change, make a process adjustment, or stop the process. If the decisions are absolutely mandatory responses to specific chart outcomes, then we can begin to consider the selection of control limits. If the decisions can be overridden by some form of management edict, then the following discussion becomes moot.
    Assuming we can proceed: it’s not hard and fast statistical theory, but the general idea behind the +/- 3s limits is to manage the process economically without excessive risk of decision error. On the Xbar half of the chart one could decide that the mean has changed when in fact it hasn’t, or that the mean is constant when in fact it is not. The trouble is, we never know whether a decision error actually exists, at least not from the chart alone.
    That being said, it could be permissible to alter the limits under certain circumstances. For instance, if the control limits lie well within specification limits (perhaps a proverbial “six sigma” process), and if units of production outside control limits increase costs in a known way, there might be a motive to widen the control limits. Before doing so, however, you would be well advised to bring yourself up to speed on the following issues, or to consult with somebody who can:
    – What is the penalty associated with shipping a rejectable unit of production: costs, litigation, injury or death, reputation, etc.?
    – What is the statistical likelihood that such an event could occur, based on extensive review of historical data?
    – Do the rewards objectively justify the risks?
    If you cannot develop a respectable estimate of risks and rewards, perhaps you should not mess with these limits. It would be like me sitting in the cockpit of a 747 and playing with the controls instead of asking what they do. Not that anybody would be crazy enough to let me into the cockpit of a moving 747, of course…
    Bear in mind that a number of the “Western Electric” rules are based on data points lying within 1, 2, or 3 standard deviations. Competently changing the limits associated with those rules is far from trivial, and probably not worth the hassle. You should determine which rules create decision-error mismatches with your new control limits, and opt not to use those rules.
    Finally, I would phase it in gradually. If the ultimate goal is to expand to, say, 4s, you might want to look at historical data and see what that limit would have done in the past. If you start to move out the limits, consider “baby steps” like 3.1 or 3.2 to start with.
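
    To make the zone-rule point concrete, a hedged sketch of two Western Electric checks at the classic 1/2/3-sigma zones (moving the limits would mean re-deriving these thresholds):

    ```python
    import numpy as np

    def we_signals(x, center, sigma):
        """Flag rule 1 (a point beyond 3 sigma) and rule 2 (two of three
        consecutive points beyond 2 sigma on the same side)."""
        z = (np.asarray(x, dtype=float) - center) / sigma
        rule1 = np.abs(z) > 3
        rule2 = np.zeros_like(rule1)
        for i in range(len(z)):
            w = z[max(0, i - 2):i + 1]
            rule2[i] = (w > 2).sum() >= 2 or (w < -2).sum() >= 2
        return rule1, rule2

    r1, r2 = we_signals([50.1, 52.3, 52.4, 49.8, 57.0], center=50, sigma=1)
    print(r1, r2)  # rule 1 flags 57.0; rule 2 flags from the 52.3/52.4 pair on
    ```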

    #184444

    Don Howie
    Participant

    Mario,
    I would not defend him if I were you. Gary believes that specifications come from people’s behinds!
    You think I would hire a consultant who thinks like that? Good luck.

    #184445

    GB
    Participant

    Don,
    I’m with Mario. Gary is the real deal.
    If you knew which Gary this was, I think your internet bravado would shrink…
    Gary is “good people” and one of the few genuine “gurus” in this and other arenas.
    I loved it when some blowhard on LinkedIn was piping up, incorrectly, about the birth of Six Sigma. Gary called him out and dropped a nuke; he was there to witness the beginning. Same goes for Mario.
    Do you know who Mario is? Google is your friend.
    …like moths to the flame…

    #184503

    Ppk
    Participant

    Chad, explain it to him, because Robert S can’t. Another trite comment to disguise a lack of knowledge.

    #184996

    GB
    Participant

    So Donnie,
    Did you ever research the backgrounds of Gary and Mario?


The forum ‘General’ is closed to new topics and replies.