how to open up control limits for process with very good sigma
 This topic has 13 replies, 9 voices, and was last updated 20 years, 5 months ago by Mike Carnell.


March 3, 2002 at 12:15 am #28902
On an Xbar-R chart, a parameter with very good sigma will produce very tight control limits. This results in many out-of-control points, even though the parts are perfectly good and still far away from the product spec limits. Does anyone have any idea how to relax these control limits?
March 3, 2002 at 2:30 pm #72738

You can’t. The control limits are what they are. If you have many out-of-control points, your process is unstable and may produce defects in the near future. Even if the out-of-control points are still well within specification, you need to decide whether it is economically viable to keep searching for and removing the special causes in this process. Only you can determine whether it is worth the cost to your process.
Eileen
Quality Disciplines

March 3, 2002 at 8:41 pm #72742
Mike Carnell

Sam,
You have two concepts confused. Control charts are not designed to tell you if a product is good or bad; they are insight into process behavior. An out-of-control point shouldn’t necessarily keep you from shipping product, but it does say you have something different going on in your process. I don’t understand why you wouldn’t want to know what that is. It can provide a starting point for moving your control further upstream. Once you understand those factors, you could eliminate the control chart you are concerned about now.
Good luck.

March 4, 2002 at 1:35 am #72743

Hello Tim,
Thanks for your comments. The problem I’m seeing is slightly more complex. Here is the actual data.
Parameter : length
The parameter has very good sigma with good Cpk and Ppk. When we plot it on an Xbar-R chart for SPC monitoring purposes, the very tight sigma induces very tight control limits on the Xbar chart (because of the Xbar formula calculation).
UCL calculated using Xbar formula = 12.15707
CL calculated = 12.15537
LCL calculated = 12.15367
Specs = 11.59 – 12.81 (12.2 ± 0.61)
Process Sigma = 0.00113
Mean = 12.15367
As can be seen, the calculated UCL and LCL are very close, only about 0.0034 mm apart. Compared with the product spec tolerance width, this control-limit range takes up only about 0.27% of the entire tolerance width. Under these circumstances, management does not wish to adjust the machine (it is not economically viable). Furthermore, Cpk every week is > 1.50.
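The numbers quoted above can be checked with a short sketch; all values come straight from this post, and the only arithmetic is the ratio of limit width to tolerance width (which rounds to roughly the 0.27% figure cited):

```python
# Width of the Xbar control limits vs. the spec tolerance,
# using the numbers quoted in this post.
ucl, lcl = 12.15707, 12.15367
usl, lsl = 12.81, 11.59

limit_width = ucl - lcl              # about 0.0034 mm
tolerance_width = usl - lsl          # 1.22 mm
fraction = limit_width / tolerance_width

print(f"control-limit width: {limit_width:.4f} mm")
print(f"fraction of tolerance: {fraction:.2%}")
```

Note that the quoted LCL equals the quoted mean, which is one sign of the data inconsistency a later reply points out.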
Under such conditions, if we still want to use an SPC chart to monitor, what variables chart would be less sensitive to minor shifts yet still able to detect bigger shifts? I would appreciate any advice.
Sam

March 4, 2002 at 3:14 am #72744

PY,
There is an error in your data or your calculations somewhere. The first rule of SPC is that the process has to be in control. If it is, you prove this by plotting the data on a control chart: the data must fall within the control limits and not show any trends. Properly calculated control limits will bracket 99.73% of your data and will show suspect points only 0.27% of the time (almost never). If, in turn, you have points that are out of control fairly often, then either there is an error in the calculations or, as you say, the process is not in control, and the calculated control limits are therefore invalid. You cannot calculate control limits for a process that is out of control, because random, uncontrolled special causes will kick the process around, shift the mean, and produce wide, non-standard variation that exceeds your control limits. Even though the engineering specs may be much wider than the control limits, with unexplained, uncontrolled variation there is no guarantee that the process will not exceed the engineering specs. You might catch it in your sampling plan, and you may not, but there may be out-of-spec product just the same.

March 4, 2002 at 3:52 am #72746

Hi,
Why don’t you try a CUSUM control chart? It helps in finding very small shifts that other control charts can’t detect. If you need more information, mail me at
[email protected]
thanks
sridhar
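For readers unfamiliar with the suggestion above, here is a minimal sketch of the standard tabular (two-sided) CUSUM; the reference value `k` and decision interval `h` are the usual textbook defaults (0.5 and 4–5 sigma), not values given in this thread:

```python
# Minimal tabular CUSUM sketch (one-sided pair), assuming a known
# target mean mu0 and process sigma. k is the reference value
# (typically 0.5 sigma) and h the decision interval (typically 4-5).
def cusum(samples, mu0, sigma, k=0.5, h=4.0):
    """Return the indices at which the CUSUM signals a shift."""
    c_plus = c_minus = 0.0
    signals = []
    for i, x in enumerate(samples):
        z = (x - mu0) / sigma          # standardize the observation
        c_plus = max(0.0, c_plus + z - k)    # accumulates upward drift
        c_minus = max(0.0, c_minus - z - k)  # accumulates downward drift
        if c_plus > h or c_minus > h:
            signals.append(i)
            c_plus = c_minus = 0.0     # restart after a signal
    return signals

# A 1-sigma upward shift after sample 10 is caught within a few samples:
print(cusum([0.0] * 10 + [1.0] * 20, mu0=0.0, sigma=1.0))
```

As a later reply notes, though, extra sensitivity to small shifts is arguably the opposite of what the original poster asked for.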
March 4, 2002 at 7:15 pm #72787
Gabriel

I think I understand you, because I had a problem like this before. The problem is not when the process variation is too small compared with the specification, but when it is too small compared with the resolution of the instrument. As long as the instrument is much better than the process, you should not have this problem unless you have problems with the process itself. In my case the parameter was weight, and my digital scale had a resolution of 0.1 g. The repeatability and reproducibility were excellent (anybody could weigh the same piece hundreds of times and get the same value). The problem was that the variation of the process was more or less ±0.05 g. When charting Xbar-R, most ranges were zero, and any subgroup with a range of 0.1 (the minimum you can get with this instrument) was out of control (the UCL for R was about 0.07). More or less the same happened with Xbar. Why? Because it is not true that the range was 0 for those subgroups. A range of 0 would mean two identical pieces, and we know that is not possible. The range was just too small for the instrument to detect, so it was rounded to 0. This artificially shrinks the “within the limits” zone both for R and for Xbar. The solution?
1) The best option is to buy a better instrument, much more precise than the process (10 times).
2) If that is not possible, an alternative is the following pair of rules for the range, to be used when the process variation is not significantly larger than the resolution of the instrument: a) If the range of a subgroup is larger than zero, use that value as the range of the subgroup (as always). b) If the range of a subgroup is zero, do not use zero; instead use 1/3 of the resolution of the instrument as the range for that subgroup. This makes the average range bigger (in fact, closer to the “unmeasurable” reality), enlarging the “within the limits” zone both for R and for Xbar.
I have to warn you that I couldn’t find this in any book. It is something I invented, but it is supported by theory. Rules a) and b) are based on taking the most probable value for the “true” range, assuming the instrument has a rectangular distribution around each value shown, with a width of ±1/2 of the resolution (for example, with my scale’s 0.1 g resolution, when the scale shows 25.3 g I assume the “true” weight of the piece is somewhere between 25.25 g and 25.35 g, with equal probability of being anywhere in that range). It’s a sound hypothesis, isn’t it?
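The two rules above can be sketched in a few lines; the helper name and the example subgroups are made up for illustration, using the 0.1 g scale described in this post:

```python
# Sketch of the zero-range substitution rule described above:
# if a subgroup's observed range is zero (all readings rounded to the
# same value by the instrument), substitute resolution / 3 instead.
def subgroup_ranges(subgroups, resolution):
    """Subgroup ranges, replacing a zero range with resolution/3."""
    out = []
    for sg in subgroups:
        r = max(sg) - min(sg)
        out.append(round(r, 4) if r > 0 else round(resolution / 3.0, 4))
    return out

# Example with a 0.1 g scale: two subgroups read identically, one differs.
subgroups = [[25.3, 25.3, 25.3], [25.3, 25.4, 25.3], [25.2, 25.2, 25.2]]
print(subgroup_ranges(subgroups, resolution=0.1))  # [0.0333, 0.1, 0.0333]
```

The substituted ranges raise the average range, and with it the Xbar and R limits, which is exactly the effect the poster describes.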
If your problem wasn’t with the resolution of the instrument, sorry for the speech.
If you want more background on the theory that supports this, or have other questions, contact me at [email protected].
Hope this helps.
Gabriel

March 4, 2002 at 7:52 pm #72790
Mike Carnell

Gabriel,
I’m not sure how you can get into this situation without extrapolating a reading beyond the resolution the gage actually has. Typically, when you have a resolution issue and plot the data on a control chart, you get large jumps with nothing in between, or long flat runs of the same reading. If you apply the rules for out-of-control situations, the resolution issue gets flagged very early in the process. You have an OOC, and the solution is new gaging.
March 4, 2002 at 8:00 pm #72791
Mike Carnell

I am not sure if you have ever tried to use this type of chart in a manufacturing environment, but it is really a poor application; it is not a very intuitive control chart. The issue isn’t detecting small changes, since the process can withstand large shifts without affecting product quality. The real issue seems to be that they are controlling a Y (Y = f(x)). They need to be on the x’s, not the Y’s.
Sorry about referring to the author of the original question in the third person, but I could not get back to the original question.

March 4, 2002 at 8:30 pm #72796

I’ve used a method called the Modified Control Chart (Xbar-R). There is a chapter about it in Douglas Montgomery’s “Introduction to Statistical Quality Control” that will help open up the control limits in a “statistical” way, an economical approach for manufacturing processes with well-controlled variability (sigma). It allows the distribution to shift, provided the sigma spread is much less than the tolerance spread. I have used it before and found it useful. Has anybody else used it?
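For reference, the modified control chart places the limits relative to the spec limits rather than the process mean, allowing the mean to drift as long as the fraction nonconforming stays below a chosen delta. A minimal sketch follows, using the sigma and specs quoted earlier in this thread; the subgroup size n = 5 and delta = 0.0013 are illustrative assumptions, not values from the thread:

```python
# Modified control limits: the mean may wander as long as no more than
# a fraction `delta` of product falls outside the specs.
#   LCL = LSL + (Z_delta - 3/sqrt(n)) * sigma
#   UCL = USL - (Z_delta - 3/sqrt(n)) * sigma
from math import sqrt
from statistics import NormalDist

def modified_limits(lsl, usl, sigma, n, delta=0.0013):
    z_delta = NormalDist().inv_cdf(1 - delta)   # ~3.0 for delta = 0.0013
    offset = (z_delta - 3 / sqrt(n)) * sigma
    return lsl + offset, usl - offset

# Specs and sigma from the data posted earlier; n and delta are assumed.
lcl, ucl = modified_limits(lsl=11.59, usl=12.81, sigma=0.00113, n=5)
print(round(lcl, 4), round(ucl, 4))
```

With a sigma this small, the modified limits sit just inside the specs, which is the “opened up” behavior the poster describes.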
March 4, 2002 at 8:55 pm #72799
Gabriel

Mike: I agree with you. That’s why I said the best option is to change the gage. The other solution is just an imperfect alternative when you have no further possibilities. For example, in a manufacturing environment, measuring better than 1 µm (1×10⁻⁶ meters) is almost impossible, yet there are processes with a precision of a few µm. Would you invest in a gage that can read 0.1 µm and must be used in a laboratory environment by qualified people, if the tolerance range is on the order of 15 µm?
If the resolution is not “too bad,” you can try my approach and see whether it works before investing in a new gage.
In the example from my previous message, the resolution was too bad (compared with the process spread), and we finally bought a new scale with 0.01 g resolution. You must keep it very clean, perfectly leveled, and protected from any air movement, but we had no choice.
But according to a simulation I ran, if the spread of the process had been a little larger, it would have worked (with those rules, not without them). In such a case it may also be worth reviewing some of the run rules. For example, “7 points in a row above the mean”: it may be better to consider this rule met only when the seven points are strictly ABOVE the mean, not AT OR ABOVE it. Under the (false) hypothesis that you are measuring a variable that can take any value, the probability of a value falling exactly at the average is zero. When the resolution is at the “border,” the probability of a value landing on the mean is either zero (if the mean is not a multiple of the resolution) or finite and not small (if it is). Remember that out-of-control criteria are based on patterns that have about a 99.9% probability of not belonging to the distribution of a process that is under control. Anyway, I repeat: those rules are not always applicable, and it is always better to buy a better gage.

Gabriel

March 5, 2002 at 1:32 pm #72818

Remember, the control limits are the voice of the process. They are not affected by the specification limits. Specification limits are used to calculate sigma values, not control limits.
Your control limits are what they are. Based on the within-group variation, they predict where 99.7% of your data should fall.
Do not change the limits; perhaps you want to tighten your specification instead.
March 5, 2002 at 9:16 pm #72839
Andy Schlotter

Sam,
How sensitive is your measuring tool? If you’re measuring to .000s, it should be calibrated to read .0000s. Are there multiple operators? How much of the process variation is caused by variation in the measurements? A gage R&R would answer this. Are your sample intervals appropriate? Consider running a 50-piece capability study: check 50 consecutive pieces, calculate ±3s, and plot those readings. You may find that your intervals are too far apart to properly detect process shifts.
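The 50-piece study suggested above amounts to a few lines of arithmetic; the readings below are fabricated for illustration only (real data would come from 50 consecutive pieces):

```python
# Sketch of the 50-piece capability check: measure 50 consecutive
# pieces, compute mean +/- 3s, and compare against the spec limits.
from statistics import mean, stdev

# Fabricated readings around the 12.155 mm level quoted in this thread.
readings = [12.155 + 0.001 * ((i * 7) % 5 - 2) for i in range(50)]

m, s = mean(readings), stdev(readings)      # sample mean and std dev
lower, upper = m - 3 * s, m + 3 * s         # +/- 3s natural spread

print(f"mean={m:.5f}  s={s:.5f}  +/-3s band=({lower:.5f}, {upper:.5f})")
```

Plotting those 50 readings against the ±3s band (and the specs) shows whether the sampling interval used on the regular chart is fine enough to catch shifts of the size that actually occur.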
Hope this helps.
Andy

March 5, 2002 at 10:06 pm #72849
Mike Carnell

Gabriel,
We seem to be in violent agreement on the gage issue.
Sam, or PY, or whoever asked this question:
If management doesn’t want to shut it down, then don’t. If you start running OOC conditions mixed with the in-control data, the limits will open up on their own.
With a Cpk > 1.5, just stop using the chart. There is a pretty low likelihood of a defect, and you aren’t acting on it anyhow. If you choose to ignore the very purpose of the chart, why run it at all?