Control Limits
 This topic has 17 replies, 10 voices, and was last updated 13 years, 10 months ago by Taylor.


March 13, 2008 at 12:14 pm #49583
baskaran (Participant)
Hi,
I have come across a situation where the control limits are negative.
As per my knowledge, negative values should not be entertained because they have no meaning.
The situation is this:
I am working in the IT industry, where we measure person-effort and calculate the % effort variation based on estimated versus actual.
Sometimes the LCL comes out negative when we calculate it.
Can anyone please clarify whether we can have negative values in the control limits?
Thanks
Baskaran.

March 14, 2008 at 12:44 pm #169672
Amirtharaj H (Participant)
Baskaran,
You can convert the −ve value to zero. We can’t take a −ve value for any limit.
Thanks
Amirtharaj
March 14, 2008 at 3:53 pm #169692
Dr. Ravi Pandey (Participant)
Bhaskaran,
You have to first understand the process and choose the right distribution. You cannot just substitute zero for the negative value; that would be wrong. I have not thought through your data, but here are some comments. There are distributions that will not give you a negative limit, such as those behind p/c charts. If you choose your distribution correctly, and if the process really is represented by a binomial/Poisson distribution, then you will not get a negative limit. I think it may be a bit late in your case. However, another way to avoid a negative limit is to increase the sample size. Hope that helps.
rgds
ravi
March 14, 2008 at 7:00 pm #169704
Dr. Ravi Pandey,
In SPC, p or c charts often give a negative LCL; it is a convention to force this to 0. Most statistical software does this by default.
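As a sketch of that convention, here are standard three-sigma p-chart limits with a negative LCL forced to zero (Python, with made-up defect counts; not from the thread):

```python
import math

def p_chart_limits(defectives, sample_size):
    """Three-sigma p-chart limits; a negative LCL is clamped to 0 by convention."""
    p_bar = sum(defectives) / (len(defectives) * sample_size)  # overall fraction defective
    sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)       # std dev of a sample proportion
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)                          # force a negative LCL to zero
    return lcl, p_bar, ucl

# Made-up example: about 2% defective on average, samples of 50 items
lcl, centre, ucl = p_chart_limits([1, 0, 2, 1, 0, 1, 2, 1], 50)
print(lcl, centre, ucl)  # lcl is 0.0 here because p_bar - 3*sigma is negative
```

With a low fraction defective and a modest sample size, the raw LCL is negative, so the chart simply has no lower limit to signal against.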
March 14, 2008 at 7:06 pm #169705
Dr. Ravi Pandey (Participant)
You can create anything artificially, but that does not make it right. Given the general knowledge about these things, making the limits zero or leaving them as they are has little practical meaning.
The best solution is to increase the sample size to lift the limit above zero; anything else is a compromise. I am not sure Bhaskar will see any change in his result or interpretation whichever he chooses.
I was commenting on theory.
It is a moot discussion.
rgds
ravi
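For a p-chart, the sample-size suggestion can be made concrete: the lower limit p̄ − 3·√(p̄(1 − p̄)/n) is positive only once n exceeds 9(1 − p̄)/p̄ (the same threshold derived later in the thread). A rough Python check with an illustrative 2% nonconforming rate:

```python
import math

def min_sample_size_for_positive_lcl(p_bar):
    """Smallest n whose three-sigma p-chart LCL, p_bar - 3*sqrt(p_bar*(1-p_bar)/n),
    is strictly positive. Algebraically this means n > 9*(1 - p_bar)/p_bar."""
    n = 1
    while p_bar - 3 * math.sqrt(p_bar * (1 - p_bar) / n) <= 0:
        n += 1
    return n

n = min_sample_size_for_positive_lcl(0.02)  # about 2% nonconforming
print(n)  # lands just above the 9*(1 - 0.02)/0.02 = 441 threshold
```

The search loop sidesteps rounding at the exact algebraic boundary; in practice you would round up to a convenient sample size such as 450.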
March 14, 2008 at 7:19 pm #169706
Hey Doc,
Two things:
1) Stop advertising with your link on your posts. This site is not for that.
2) We’ve had a series of people who attach Dr. to their name over the years that don’t know squat. You appear to be the latest. This guy is charting variation which is naturally bounded by zero. To just change the lower bound to zero is right. Telling him to increase sample size is just stupid.

March 14, 2008 at 7:41 pm #169709
Dr. Ravi Pandey (Participant)
My friend Stan,
I do not get involved in personal comments. Read the note and you will understand my opinion and guidance.
As far as the doctorate is concerned, come by sometime and we can talk about whether it is fake or real. Maybe you can judge the credibility of American universities by judging the quality of my doctorate.
As for the website, that was not an advertisement. I am not a regular visitor here; I leave info so that if anyone wants to contact me, they know where to look. And I am sure that if I am making any policy violation, some official will inform me.
I do not hide behind a pseudonym.
Peace Friend….
rgds
ravi
March 15, 2008 at 12:49 am #169718
Doc Ravi,
I did not say your doctorate is real or fake, I said your advice was worthless.
Posting your company’s web site is advertising; even a PhD can figure that out.
I do not hide behind some pseudonym either, and I don’t give BS advice.
Stop advertising and answering questions that you give no thought to. We have enough mediocre knowledge without you already.

March 15, 2008 at 6:04 am #169724
Swaggerty (Participant)
Stan, let’s keep personal attacks out of this; the forum is not for that either. On another note, I agree with you. Dr Ravi, at times I have felt that you may be advertising yourself a bit too much, and I’m sure it is not just an individual perception. If you would like to help, please do so without any fuss. If somebody wanted consulting services, I’m sure there are other places to look.
March 15, 2008 at 8:56 am #169727
Bower Chiel (Participant)
Hi Baskaran
It’s not clear from your post what type of chart you are using.
I work with people caring for stroke patients, and one performance measure we look at is the proportion of patients having a brain scan within 48 hours of admission to hospital. The natural choice of chart would appear to be a p-chart. However, the p-chart is based on the binomial distribution, which presupposes that the probability of a patient being scanned within 48 hours is constant. This is not the case, as radiology provision is not the same during the night as during the day, etc., so we therefore used Individuals (X) charts of the proportions expressed as percentages.
For some hospitals, where the proportions scanned within 48 hours are relatively high, the upper control limit turned out to be in excess of 100. In such cases we placed an upper bound on the Individuals chart at 100. (Minitab has a mechanism for doing this easily.) Clearly we no longer have the possibility of a signal providing evidence of special-cause variation via a point falling above an upper limit, but we were able to use other tests, such as the occurrence of 8 points in a row above the centre line, to yield evidence of process improvement. We have successfully used the Individuals chart in this way to demonstrate the impact of new, faster scanners and the subsequent earlier administration of drugs for secondary stroke prevention.
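That Individuals-chart construction can be sketched roughly as follows (Python, with made-up weekly percentages, not the poster's data; 2.66 is the standard moving-range factor for X charts, and the UCL is capped at the natural bound of 100):

```python
def individuals_chart_limits(values, upper_bound=None):
    """Individuals (X) chart limits: mean +/- 2.66 * average moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)  # average of successive |differences|
    lcl = mean - 2.66 * mr_bar
    ucl = mean + 2.66 * mr_bar
    if upper_bound is not None:
        ucl = min(ucl, upper_bound)  # cap at the natural bound (here, 100%)
    return lcl, mean, ucl

# Made-up weekly percentages of patients scanned within 48 hours
pct = [92.0, 95.0, 97.0, 94.0, 96.0, 93.0, 98.0, 95.0]
lcl, mean, ucl = individuals_chart_limits(pct, upper_bound=100.0)
print(lcl, mean, ucl)  # the uncapped UCL exceeds 100, so it is capped at 100
```

With the UCL pinned at the bound, only run-based tests (such as 8 points in a row above the centre line) can signal improvement on the high side, which matches the approach described above.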
If you are using a p-chart in a situation where the proportion of nonconforming items is low, resulting in a negative lower limit, you could increase the sample size to get the lower limit above zero. Some algebra shows that the sample size must exceed 9(1 − p)/p when you are using the standard three-sigma limits. Thus if the proportion nonconforming were around 2%, you would require a sample size in excess of 9(1 − 0.02)/0.02 = 441, so you might opt to take samples of size 450, were such a sample size feasible.
You can avoid the difficulties associated with impossible limits on Shewhart charts by using CUSUM charts. In situations where there is no actual target value, I often use what Professor Roland Caulcutt refers to as a post-mortem CUSUM, taking the mean of the string of data values as the target, and often get very useful insights into process changes as a result.
Best Wishes
Bower Chiel

March 15, 2008 at 9:09 am #169728
Dear Dr. Pandey,
I used Minitab some time ago to study this problem. My approach was to scale all values to a higher magnitude to eliminate the negative CLs, test stability, and then scale back.
Perhaps I’m misguided, but I’ve never found a case where the assumption of setting a negative CL = 0 posed a problem with the assessment of stability.
As for your view that constrained values have a non-normal distribution, I’m not sure, because I thought multiplying a normal distribution by a scalar would not change the shape of the distribution.
Perhaps someone else can clarify?
Cheers,
Paul

March 15, 2008 at 11:56 am #169730
What’s personal about asking him to stop advertising and giving round-worded, textbook advice? The guy’s offered nothing, but gets to advertise for it.
Some people are impressed and think it’s important when they see a Dr. or PhD attached. There’s nothing impressive about this guy.

March 15, 2008 at 11:59 am #169731
Having them default to zero has been done since Shewhart and is taught in SPC books.
It is evidence that you are dealing with a skewed, naturally bounded distribution. Read Wheeler if you think that it is an issue.

March 15, 2008 at 11:41 pm #169739
Dr. Ravi Pandey (Participant)
Paul,
I was not going to make any more comments here, but you got me intrigued. While I am at it, I will add a bit more information; hope you do not mind.
In engineering, we make lots of assumptions. What we need to know are the results and the risks associated with those assumptions. So setting control limits to zero is a simplification, and there are risks; the question is the extent of that risk. Without offending Stan about my PhD: I recall I had developed a model (that was a million years ago) that no one could solve; it was a nonlinear PDE. I knocked on the doors of some top-notch mathematicians for help and in the end did what every engineer does: I made assumptions, simplified, and solved. Anyone reading those publications has to understand that the results I published were subject to those assumptions. That is all.
Minitab has a very good team, and they have done a lot of good work; I once asked them to add a product feature and they did, though I have not used Minitab in the last few years. However, they also run into issues here and there. I am not sure now, but I had a big discussion about their destructive GR&R: nowhere did they explain the risk/uncertainty of their approach. Whether they fixed it or not, I am not sure. The method they chose is what the world uses; that does not mean it is right. It is just the best available at this time. It is right under its assumptions.
I am going to give an example of a zero limit; take it as a concept, as I do not want to start another chain. Think of a kite: when it is in the air, it goes up and down a little, and you know what to do; the kite flies at a mean height with a few feet of movement up and down. Now, if you put an artificial limit on either side of the mean, you do not know how far up or down the kite might naturally have gone. You do not know the extent of variability, and thus you do not know your capability as a flier, i.e. in what band you can maintain the height. And you do not know what measures you need to control the height.
So, do we put in artificial limits? Of course. Is it the right thing to do? I do not know. I can definitely say that it is sometimes the practical thing to do. If the problem you are solving carries little liability, you might be OK; if not, you or the business will face the consequences of not understanding the risks.
That is all. I am the biggest proponent of Six Sigma, so much so that I have used it for corporate-level strategic analysis, but I do not use it blindly.
On the scalar multiplication, I have not looked into it. If you find out, let me know. On the surface it sounds OK; however, I am not sure it would address your issue of CL = 0. Would it not just scale the standard deviation as well?
I am not a statistician, so it takes me some effort to come to conclusions on questions such as yours.
Hope that makes better sense. If you want to discuss further, please send me an email; I do not think it is of interest to most of the people here.
rgds
ravi
March 16, 2008 at 7:51 am #169744
His parameter of interest has a natural limit of zero. Move the control limit to zero and move on to a relevant issue.
Doc Ravi – send me an email if you still don’t get the simplest of concepts.

March 16, 2008 at 12:32 pm #169745
Baskaran,
If you are measuring “person effort” and calculating % effort variation based on estimated versus actual, it seems like negative values are likely. Correct me if I am wrong, but if your IT group estimates 40 hours for one of the tasks and it takes 30 hours, the result is −10/40, or −25%.
If this is how you generate your plotted value, you do not have zero as a natural limit. Please clarify.

March 16, 2008 at 12:50 pm #169747
Dear Dr. Ravi,
I meant to imply that you scale the raw data. Clearly, if you increase the magnitude of the data, the standard deviation as a percentage of the magnitude becomes smaller.
I believe we share the same approach – don’t assume anything you can check yourself.
Regards,
Paul

March 16, 2008 at 4:26 pm #169753
Taylor (Participant)
Baskaran
Your “estimated versus actual” measure is the problem. You can handle this in one of two ways:
1) Treat the negative numbers as special cause and set the limit to zero.
2) Re-evaluate your “estimated versus actual” measure so that a negative LCL is not possible.
As hacl has stated, it is possible to have a negative LCL for your calculations.
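Tying the thread together: with the effort-variation definition discussed above, negative plotted values are expected, and an Individuals chart whose LCL is genuinely negative is legitimate. A hypothetical Python sketch (made-up estimated/actual hours, not the original poster's data; 2.66 is the usual moving-range factor):

```python
def effort_variation_pct(estimated, actual):
    """Percent effort variation: negative when the task beat its estimate."""
    return 100.0 * (actual - estimated) / estimated

# Made-up (estimated, actual) hours for a series of IT tasks
tasks = [(40, 30), (20, 22), (50, 45), (10, 13), (30, 30), (25, 20)]
variation = [effort_variation_pct(e, a) for e, a in tasks]
print(variation)  # e.g. (40, 30) -> -25.0: the task under-ran its estimate

# Individuals-chart limits: mean +/- 2.66 * average moving range
mean = sum(variation) / len(variation)
mr_bar = sum(abs(b - a) for a, b in zip(variation, variation[1:])) / (len(variation) - 1)
lcl, ucl = mean - 2.66 * mr_bar, mean + 2.66 * mr_bar
print(lcl, ucl)  # a negative LCL here is meaningful, not an artefact
```

Because the plotted statistic itself ranges over negative and positive values, there is no natural bound at zero and no reason to clamp the LCL, which is hacl's point.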