SPC Question
Six Sigma – iSixSigma › Forums › Old Forums › General › SPC Question
 This topic has 35 replies, 10 voices, and was last updated 15 years, 9 months ago by Helper.


January 18, 2006 at 7:12 pm #42048
Hi,
I have a question regarding SPC in manufacturing.
At my plant we use coded data for SPC readings every 2 hrs across 24-hr production. The subgroup size is 5 and we are using an X-bar/R chart.
How often should the control limits be recalculated? As of now no change has been made to the process, although a project to reduce variation is underway.
By looking at the chart we can see some out-of-control points, but as of now we do not know how to prevent them from recurring.
Thanks for your help
Vinod

January 18, 2006 at 8:36 pm #132535
Stefan Szemkus
The control limits are normally recalculated only when the process has been changed in some substantial way.
More importantly, why are you maintaining SPC charts? What is the goal? If you hope to use SPC to identify “special causes” and eliminate them, you need to investigate what happened to the process at the times you see signals on the chart.

January 18, 2006 at 9:15 pm #132536
Hello Vinod,
You ask a very good question. As you know, each time a subgroup is added, it feeds into the equation that calculates the control limits, so with each subgroup the control limits are recalculated. Your analysis for special causes, however, should wait until you have at least 20 - 24 data points. You can do some preliminary special-cause analysis at 10 points, but it would not be advisable to take any action until at least 20 - 24 data points. In your case, that would seem to be at the end of the second day?
Hope this helps…
That being the case, I have a question for you. How many products per hour do you have coming off of the production line?

January 18, 2006 at 10:07 pm #132541
We have 8,500 products per 8-hour shift, with a 40-minute break.
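As a reference point for this exchange, here is a minimal sketch of the X-bar/R limit arithmetic under discussion (an illustration only, not part of the original thread; A2, D3 and D4 are the standard Shewhart chart factors for a subgroup size of 5, and the function name is my own):

```python
# X-bar/R control limits for subgroups of size 5.
# A2, D3, D4 are the standard Shewhart chart factors for n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Return (LCL, UCL) pairs for the X-bar chart and the R chart."""
    xbars = [sum(g) / len(g) for g in subgroups]       # subgroup averages
    ranges = [max(g) - min(g) for g in subgroups]      # subgroup ranges
    grand_avg = sum(xbars) / len(xbars)
    r_bar = sum(ranges) / len(ranges)
    return {
        "xbar": (grand_avg - A2 * r_bar, grand_avg + A2 * r_bar),
        "r": (D3 * r_bar, D4 * r_bar),
    }
```

Note that every new subgroup changes `grand_avg` and `r_bar`, which is exactly why charting software recomputes the limits unless told otherwise.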
January 20, 2006 at 9:09 pm #132638
Hello Vinod,
Thank you for the response. I see a couple of things that may be useful to you in your efforts.
1. Since your production line is producing so many units, I would change my observations from every 2 hours to every 15 minutes. Hence at the end of the day you will have around 28 - 32 observations.
2. I would also change my control chart from an X-bar/R to an X-bar/S with a subgroup size of around 11 - 16.
Both adjustments will make your observations more statistically efficient and will enable you to arrive at conclusions regarding process stability in about half the time.
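A comparable sketch of the X-bar/S variant suggested here (the subgroup size of 12 and the helper function are my own assumptions for illustration; A3, B3 and B4 are the standard chart factors for n = 12):

```python
import statistics

# X-bar/S control limits for subgroups of size 12 (each subgroup size in
# the suggested 11 - 16 range has its own table constants).
A3, B3, B4 = 0.886, 0.354, 1.646  # Shewhart factors for n = 12

def xbar_s_limits(subgroups):
    """Return (LCL, UCL) pairs for the X-bar chart and the S chart."""
    xbars = [statistics.mean(g) for g in subgroups]
    sds = [statistics.stdev(g) for g in subgroups]  # subgroup sample std devs
    grand_avg = statistics.mean(xbars)
    s_bar = statistics.mean(sds)
    return {
        "xbar": (grand_avg - A3 * s_bar, grand_avg + A3 * s_bar),
        "s": (B3 * s_bar, B4 * s_bar),
    }
```

The S chart uses the subgroup standard deviation instead of the range, which wastes less information at larger subgroup sizes.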
Hope this helps……….

January 20, 2006 at 10:15 pm #132644
Hi Helper,
Thanks a lot for your reply and your suggestion. Great help indeed.
But should the control limits be recalculated after longer periods of time, like a week, or after a significant process change has been effected? Or should we go for every 28/32 subgroups?
I am sorry if I did not get it the first time.
Thanks
Vinod

January 21, 2006 at 12:08 am #132653
Is it safe for me to assume that you are using a software application to help you construct your control charts? Something tells me that may not be the case.
In either case, I believe I can be of service. If you are in the states, feel free to contact me at 480 755 6494, 8am – 5pm MST. I will be more than happy to walk you through it.
Ready to help….

January 21, 2006 at 2:35 pm #132658
Helper,
Just a question for you :
You said the frequency of sampling should be every 15 minutes. Now how can you be so sure that this frequency is the right one?
Regards
j
January 22, 2006 at 6:45 pm #132669
Hello J,
Thank you for writing. For production rates over 100 per hour, I would recommend that samples be obtained every fifteen (15) minutes.

January 26, 2006 at 1:49 pm #132898
To all who posted, I would suggest reading “Understanding Statistical Process Control” by Wheeler and Chambers. Stefan is on track.
January 26, 2006 at 6:08 pm #132925
???
Are not control limits recalculated/amended with the application of a new observation? Are not control limits calculated from the data, and as the number of data points gets embedded on the chart, does that not adjust the control limits?
Thank you for your “go read a book” suggestion; that is very helpful. I am interested in your thoughts on the questions above and how relevant they are to the initial questioner, who was seeking help.

January 26, 2006 at 6:50 pm #132933
Jonathon Andell
Donald Wheeler presented an interesting set of criteria, which I will badly paraphrase:
– When the process behavior clearly has changed
– When the cause of the change is known
– When the change benefits the customer
– When you anticipate that the change is “permanent”
If all four criteria are met, go for it.

January 26, 2006 at 7:41 pm #132936
Dr. Scott
Jonathon et al.,
I believe this is what Stefan was suggesting, and this is absolutely correct.
Dr. Scott

January 27, 2006 at 2:31 pm #132982
Helper, I apologize if I seemed short; that was not my intent. If you recalculate the limits at every point, how are you going to know when things are going wrong? Again, Stefan hit the nail on the head: only recalculate when you know the process has changed and why. Another post gave four criteria that seemed reasonable.
Vinod, the way to prevent out-of-control points is to improve the process so that it is behaving predictably.

January 31, 2006 at 1:40 am #133119
Interesting question, RonJ. I ran some numbers today. The case: I am tracking cycle time in a processing center for every day in the month. At day number 15, my results are as follows:
avg: 27.1, UCL: 35.7, LCL 18.4.
Three days later I run the numbers again, my results are as follows:
avg: 27.6, UCL: LCL 19.4.
When I get to the end of the month:
avg: 28.4, UCL: 35.8, LCL: 20.9.
Each time I added a subgroup or several subgroups, doing so automatically recalculated the control limits. Not by much, but to hold true to having control limits, the recalculation must occur as a consequence of new data being entered into the equation. You must agree, yes?
As an FYI, the control chart detected a trend from day 3 through day 7. I dared not draw any conclusions until I reached day 22 or 25, but nonetheless, for that time special-cause variation was present in my process performance. It still held until day 25.
I am not sure what book everyone else is reading from, but with every subgroup your mean may change and so will the range; thus, by mathematical consequence, so will your UCL and LCL.
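The shifting-limits effect described here can be reproduced with a short script (the cycle-time values are invented for illustration, and 2.66 is the standard individuals-chart factor applied to the average moving range):

```python
# Individuals (I-MR) chart limits recomputed as data accumulates.
# 2.66 converts the average moving range into 3-sigma limits.
def imr_limits(data):
    mr_bar = sum(abs(b - a) for a, b in zip(data, data[1:])) / (len(data) - 1)
    mean = sum(data) / len(data)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Made-up daily cycle times for one month of a processing center.
cycle_times = [27, 26, 29, 25, 28, 30, 27, 26, 28, 29,
               27, 25, 28, 30, 26, 29, 27, 28, 31, 26]

for n in (15, 18, 20):  # rerun the numbers as more days arrive
    lcl, avg, ucl = imr_limits(cycle_times[:n])
    print(f"day {n}: LCL={lcl:.1f} avg={avg:.1f} UCL={ucl:.1f}")
```

Each rerun shifts the limits slightly; the debate in the rest of the thread is whether one should keep doing this once the process has been judged stable.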
Perhaps I am off base?

January 31, 2006 at 1:54 pm #133126
Hello Helper, I’m afraid I do not agree. When you have determined that you have a stable process and you are comfortable with the amount of data you have used to calculate the limits, set the limits and do not recalculate until the process has changed.
Each new point is a snapshot in time of your process, and you are comparing that to limits based on your known stable process.
Perhaps others can explain it better. Below you will find what other folks have said about recalculating limits.
Good Luck.
https://www.isixsigma.com/library/content/c021202a.asp
https://www.isixsigma.com/offsite.asp?A=Fr&Url=http://www.qualityadvisor.com/library/control_charts/recalculate.htm

January 31, 2006 at 2:20 pm #133131
Johnny Guilherme
The control limits should have been calculated once the PC (process capability) study was conducted. Prior to the process capability study you should have done some sort of optimising on the process in order to determine the optimal control settings (together with upper and lower ranges for these control settings). The way I understand it, you should not have to recalculate your control limits again.
Just my opinion.
Johnny

January 31, 2006 at 5:24 pm #133145
Hi Ron,
I think I see what you are saying. I just cannot get around the fact that each time you add a subgroup to the analysis, the math dictates that the new data from the subgroup will require an adjustment to the control limits. After all, your grand average will change and so will your range. I am assuming, of course, that within your sampling plan this will occur between the first and 25th observation. You cannot avoid the math in the calculation for your analysis; otherwise, whatever conclusion you make after your 25th observation will be erroneous.
How do you get around the math? I am curious.
If you get a chance, give me a ring sometime today: 480 755 6494. Perhaps I am misunderstanding you.

January 31, 2006 at 5:34 pm #133147
You do not continue to recalculate control limits forever, or after every data point or subgroup is added. Once you have achieved statistical control, you set your current control limits so that they do not change!
You cannot allow your control limits to keep moving indefinitely, unless you’re just making pretty charts.
The laws of probability that govern the detection of special causes are based on the fact that the limits are established at +/ 3 STD from the mean. We all know that. The question is that your mean from observation 5 to observation 25 will change. Yes? It is problematic if you seek to find special causes before 22 – 25 observations. Agreed? There for if you are analyzing process performance for special causes, and if you don’t recalculate the control limits from observation 5 to 25, you are not actually using the laws of probability correctly. More over, if you do not recalcualte the control limits from observaton 5 to 25 are you not using the proper equations. Every software application on the market will recalculate the control limits based on new observations.
The math dictates it….. You have to agree that the math dictates it. Yes?0January 31, 2006 at 5:57 pm #133152Yes/No. You do not “set” (not allow recalcuation) the control limits until AFTER you have achieved statistical control. To achieve statistical control you should have at least 30 subgroups for variable data, and probably much more for attribute data.
Yes – I also do not interpret “out of control” points until I have at least 30+ points/subgroups.
Every software package on the market will not do that! There are options to set control limits where you want and to force a certain mean or std dev. At some point (after you have reached statistical control) you need to have a way to keep the UCL/LCL constant. All good SPC software has methods to do that!
You agree? Logic dictates it? Yes?

January 31, 2006 at 5:58 pm #133153
Horticulture
I’ll agree with that.
The aim of the game with control charts is to spot parts (or measurements) that are so different from the “norm” that something odd most likely happened to drive them there. The thought is that you then chase down the odd thing and stop it happening again, so that over time the process becomes less variable.
The size of the “norm” is set from your initial sampling (LCL, UCL and Grand Average). Only after significant process changes should the “norm” be re-established and your LCL, UCL and Grand Average be recalculated.
However, if you are in the phase of establishing what the “norm” is, I see no reason not to calculate the LCL etc. for every point, just to see.

January 31, 2006 at 6:01 pm #133154
Yes. :) I agree…

January 31, 2006 at 6:31 pm #133155
OK……. I think we are somewhat in agreement. I do agree with the assertion regarding the need to have a certain number of subgroups to determine statistical control. And the more subgroups the better.
Here is where I disagree: once the process performance is in statistical control, that does not mean that the mean and the std will remain the same. In fact, one thing that we can count on is that your next 30 observations will have a different grand mean and standard deviation. If you freeze the control limits and continue collecting data, you really don’t have control limits, because they are not based on the new data you collected; they are based on the old data you used when the process performance stabilized.
Logic dictates that if your control limits are calculated from the data, and you have new data, then you will have … You agree?

January 31, 2006 at 6:58 pm #133157
There is no doubt that different (i.e. additional) data will have a different mean. The question is: is the new mean so much different than the old mean that you care about it?
I do not recall saying that the mean for additional data will be the same.
Sample statistics are just samples… we will never know what the population mean is; we just estimate it. So we will have a different estimate each time? Yes. Do I care? Only if it is significant. What is significant? Anything that violates a special-cause rule. (The same logic applies to std dev.)

January 31, 2006 at 7:13 pm #133159
Good point……… Let’s put it to the test. Let’s say that after your first 30 subgroups you find your process to be stable. You lock in your control limits and then collect another 30 subgroups (X-bar/S chart) with a subgroup size of 32. Here is the question: is it possible to violate some of the special-cause rules, like the one- or two-sigma rule, and not be able to detect it with the control limits that you chose not to recalculate? If so, is your process performance free of special-cause variation? Is it possible that the actual variation in the second control chart is wider than that of the first, thus producing a special-cause rule violation without detecting it? And if so, has the performance of your process improved?
Don’t adjust your dials; this is only a test.

January 31, 2006 at 7:26 pm #133162
I think it is certainly possible that the additional data may not violate a special-cause rule that calculating new limits would have flagged. I also think the reverse is possible. But I think this is all theoretical B.S. that doesn’t really have application.
Cheers – and thanks for the chat!
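The frozen-versus-recalculated hypothetical in the last few posts can be checked numerically. Below is a rough simulation (all distributions, sizes and the seed are invented for illustration): phase 1 fixes X-bar limits from 30 stable subgroups, phase 2 has the same mean but a wider spread, and we count how many phase-2 subgroup means land outside the frozen limits:

```python
import random
import statistics

random.seed(1)  # deterministic illustration
A3 = 0.886      # X-bar chart factor for an assumed subgroup size of 12

def xbar_limits(groups):
    """X-bar chart limits (LCL, UCL) from a set of subgroups."""
    xbars = [statistics.mean(g) for g in groups]
    s_bar = statistics.mean([statistics.stdev(g) for g in groups])
    center = statistics.mean(xbars)
    return center - A3 * s_bar, center + A3 * s_bar

# Phase 1: stable process; phase 2: same mean, wider spread.
phase1 = [[random.gauss(10, 1.0) for _ in range(12)] for _ in range(30)]
phase2 = [[random.gauss(10, 1.5) for _ in range(12)] for _ in range(30)]

lcl, ucl = xbar_limits(phase1)  # limits frozen after phase 1
outside = sum(not (lcl <= statistics.mean(g) <= ucl) for g in phase2)
print(f"phase-2 subgroup means outside frozen limits: {outside} of 30")
```

How the change shows up depends on its nature: a wider spread inflates the scatter of the subgroup means and the frozen limits will flag some of it, while subtler zone-rule violations can indeed be judged against the wrong yardstick, which seems to be the questioner's point.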
January 31, 2006 at 7:33 pm #133164
Hello Everybody,
Thanks a lot for all the inputs and suggestions.
Let me give you a little bit more detail:
total number of shifts = 3 (8 hrs each), 1 operator each
approximate number of products per shift: 8,500 units
total: 25,500 units
As of now, every 2 hrs the operators input measurement readings into the SPC software (QI Analyst).
Subgroup size of 5.
A look at the X-bar/R chart gives us the mean, UCL and LCL.
For every new subgroup added, the mean and UCL/LCL change.
There is a process improvement project going on at present, but no changes have been made to the process so far.
Is there a way to prevent the software from recalculating the control limits every time a subgroup is added? What would be an appropriate timeline / number of subgroups?
Thanks
Vinod
January 31, 2006 at 7:37 pm #133167
Horticulture
Vinod,
Are you certain that your subgroup size is constant and doesn’t vary from 5?
Is it attribute data that is being collected?
January 31, 2006 at 7:43 pm #133168
Data type: variable (measurement of length in inches; there is a target value and +/- 0.3 inches tolerance).
When the operators key in the values, they punch in the number of points above or below the target value.
Subgroup size is always 5.
Here’s an example:
target length: 22.5 inches
operator measurements: 22.6, 22.1, 22.8, 22.8, 22.7
so the operators punch in: 0.1, 0.4, 0.3, 0.3, 0.2
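The coding step just described can be sketched as follows (a minimal illustration; note that this version keeps the sign of each deviation, whereas the unsigned 0.4 above suggests the sign may be dropped when punched in):

```python
TARGET = 22.5  # target length in inches

def code_readings(lengths):
    """Code raw lengths as signed deviations from the target."""
    return [round(x - TARGET, 1) for x in lengths]

readings = [22.6, 22.1, 22.8, 22.8, 22.7]
print(code_readings(readings))  # [0.1, -0.4, 0.3, 0.3, 0.2]
```

If signs really are dropped at data entry, the chart loses the direction of each deviation, which may be worth checking in the software setup.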
Thanks
Vinod

January 31, 2006 at 7:58 pm #133169
Hello Vinod,
Thank you for providing the detail. I would answer the questions in your last sentence(s) as follows.
Question: Is there a way to prevent the software from recalculating the control limits every time a subgroup is added?
Answer: I would advise against that. The reason your control limits change is that they are designed to yield limits based on the grand average +/- 3 std. With each added subgroup, the grand average will change and, as a consequence, the control limits will change also.
Question: What would be an appropriate timeline / number of subgroups?
ASSUMING VARIABLE DATA (measuring to spec)
Answer: I would continue collecting data until you reach about 22 - 30 subgroups/observations per chart. For a process that produces more than 100 units per hour, I would take subgroups every 15 minutes. In your case, 8 hours per shift x 4 subgroups per hour would equal 32 observations per shift. Then analyze your control chart for special causes. I would also be sure to add in process capability analysis and measurement systems analysis to understand what is out of spec and how much noise (variation) is due to your measurement system.
Just one person’s opinion.
Hope this helps….

January 31, 2006 at 8:05 pm #133170
Thank you for the additional detail. With respect to Cpk analysis, I am curious about your spec limits. You conveyed that your target length is 22.5 inches. Is there a +/- margin of acceptance that you use for process capability?
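For reference, the capability calculation being asked about can be sketched as follows (an illustrative sketch only, reusing the five readings and the +/- 0.3 in tolerance quoted elsewhere in the thread; a real study would use far more data, and this overall-sigma version is strictly Ppk rather than Cpk, which uses a within-subgroup sigma):

```python
import statistics

def cpk(data, lsl, usl):
    """Capability index from an overall standard deviation estimate."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    return min((usl - mean) / (3 * sd), (mean - lsl) / (3 * sd))

sample = [22.6, 22.1, 22.8, 22.8, 22.7]   # the subgroup quoted above
print(round(cpk(sample, lsl=22.2, usl=22.8), 2))
```

A value below 1 would indicate that the spread of the process, centered where it is, does not comfortably fit inside the tolerance.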
January 31, 2006 at 8:13 pm #133172
Horticulture
Vinod,
What software are you using?

January 31, 2006 at 8:30 pm #133173
USL is 3 tenths of an inch above the target and LSL is 3 tenths of an inch below the target.
QI Analyst software is being used
Thanks
Vinod

January 31, 2006 at 9:36 pm #133177
Horticulture
Vinod,
QI can deal with control limits in a number of ways. It can recalculate on the basis of the last 30 measurements or predefined limits can be used.
To switch, open a data file, go to Setup, Characteristics, Variables; on the Control Limits item hit “Define”, and in the window which opens up hit “More Options”.
This gives you access to select the type of control limits you want.
Regarding the choice of control limit types, have a read of this thread and decide for yourself.
I hope this helps.

May 1, 2006 at 9:13 pm #137058
Hi J,
Sorry for such a lagging response. Been very busy! But to answer your question: he mentioned the production line was producing about 1,000 units per hour, and taking a sample of 5 every fifteen (15) minutes is an accepted rule of thumb. I wish I had a more scientific explanation for you, but that’s all I have.
Hope this helps……….
The forum ‘General’ is closed to new topics and replies.