iSixSigma

Control Limits


Viewing 18 posts - 1 through 18 (of 18 total)
  • #49583

    baskaran
    Participant

    Hi,
    I have come across a situation where the control limits are at negative values.
    But as per my knowledge, negative values should not be entertained because they have no meaning.
    Now the situation is this:
    I am working in the IT industry, where we measure person effort and calculate the % of effort variation based on estimated versus actual.
    Sometimes the LCL comes out negative when we calculate it.
    Can anyone please clarify whether we can have negative values in the control limits?
    Thanks
    Baskaran.

    0
    #169672

    Amirtharaj H
    Participant

    Baskaran,
    You can convert the -ve value to zero. We can’t take a -ve value for any limit.
    Thanks
    Amirtharaj
     
     

    0
    #169692

    Dr. Ravi Pandey
    Participant

    Bhaskaran,
    You have to first understand the process and choose the right distribution. You cannot just put zero in place of the negative value; that would be wrong. I have not thought through your data, but here are some comments.
    There are distributions that will not give you a negative limit, such as the p/c etc. charts. If you choose your distribution correctly (granted, if it is supposed to be represented by a binomial/Poisson process), then you would not have the negative limit.
    I think it may be a bit late in your case. However, the other way to avoid a negative limit is to increase the sample size. Hope that helps.
    rgds
    -ravi

    0
    #169704

    DaveS
    Participant

    Dr. Ravi Pandey, 
    In SPC, p or c charts often give a negative LCL; it is a convention to force this to 0. Most statistical software does this by default.
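    A minimal sketch of that convention in Python (the function name and the figures are purely illustrative):

    import math

    def p_chart_limits(p_bar, n):
        """Three-sigma p-chart limits, with a negative LCL forced to 0."""
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        lcl = max(0.0, p_bar - 3 * sigma)  # the conventional clamp at zero
        ucl = min(1.0, p_bar + 3 * sigma)  # a proportion is also bounded above by 1
        return lcl, ucl

    print(p_chart_limits(p_bar=0.02, n=100))  # raw LCL is negative, reported as 0.0
    print(p_chart_limits(p_bar=0.02, n=500))  # with a larger sample the LCL is positive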
     

    0
    #169705

    Dr. Ravi Pandey
    Participant

    You can create anything artificially, but it does not make it right. Given the general knowledge about these things, making them zero or leaving them as they are does not have much meaning.
    The best solution is to increase the sample size to lift it above zero; anything else is a compromise. I am not sure Bhaskar will necessarily see any change in his result or interpretation, whichever he chooses.
    I was commenting on theory.
    It is a moot discussion.
     
    rgds
    ravi
     

    0
    #169706

    Mikel
    Member

    Hey Doc,
    Two things:
    1) Stop advertising with your link on your posts. This site is not for that.
    2) We’ve had a series of people who attach Dr. to their name over the years that don’t know squat. You appear to be the latest. This guy is charting variation which is naturally bounded by zero. To just change the lower bound to zero is right. Telling him to increase sample size is just stupid.

    0
    #169709

    Dr. Ravi Pandey
    Participant

    My friend Stan,
    I do not get involved in personal comments. Read the note and you will understand my opinion and guidance.
    As far as the doctorate is concerned, come by sometime and we can talk about whether it is fake or real. Maybe you can establish the credibility of American universities by judging the quality of my doctorate.
    As far as the website is concerned, that was not an advertisement. I am not a regular visitor here; I leave the info so that if anyone wants to contact me, they know where to look. And I am sure that if I am making any policy violation, some official will inform me.
    I do not hide behind a pseudonym.
     
    Peace Friend….
    rgds
    -ravi
     

    0
    #169718

    Mikel
    Member

    Doc Ravi,
    I did not say your doctorate is real or fake, I said your advice was worthless.
    Posting your company’s web site is advertising; even a PhD can figure that out.
    I do not hide behind some pseudonym either, and I don’t give BS advice.
    Stop advertising and answering questions that you give no thought to. We have enough mediocre knowledge without you already.

    0
    #169724

    Swaggerty
    Participant

    Stan, let’s keep personal attacks out of this; the forum is not for this either. On another note, I agree with you. Dr Ravi, at times I have felt that you may be advertising yourself a bit too much, and I’m sure it is not an individual perception. If you would like to help, please do so without any fuss. If somebody wanted consulting services, I’m sure there are other places to look.

    0
    #169727

    Bower Chiel
    Participant

    Hi Baskaran
    It’s not clear from your post what type of chart you are using.
    I work with people caring for stroke patients, and one performance measure we look at is the proportion of patients having a brain scan within 48 hours of admission to hospital. The natural choice of chart would appear to be a p-chart. However, the p-chart is based on the binomial distribution, which presupposes that the probability of a patient being scanned within 48 hours is constant. This is not the case, as radiology provision is not the same during the night as it is during the day, etc., so we therefore used Individuals (X) charts of the proportions expressed as percentages.
    For some hospitals, where the proportions scanned within 48 hours are relatively high, the upper control limit turned out to be in excess of 100. In such cases we placed an upper bound on the Individuals chart at 100. (Minitab has a mechanism for doing this easily.) Clearly we no longer have the possibility of a signal providing evidence of special-cause variation via a point falling above an upper limit, but we were able to use other tests, such as the occurrence of 8 points in a row above the centre line, to yield evidence of process improvement. We have successfully used the Individuals chart in this way to demonstrate the impact of new, faster scanners and the subsequent earlier administration of drugs for secondary stroke prevention.
    If you are using a p-chart in a situation where the proportion of non-conforming items is low, resulting in a negative lower limit, you could increase the sample size to get the lower limit above zero. Some algebra shows that the sample size must exceed 9(1-p)/p when you are using the standard three-sigma limits. Thus if the proportion non-conforming was around 2% you would require a sample size in excess of 9(1-0.02)/0.02 = 441, so you might opt to take samples of size 450, were such a sample size feasible.
    You can avoid the difficulties associated with impossible limits on Shewhart charts by using CUSUM charts. In situations where there is no actual target value I often use what Professor Roland Caulcutt refers to as a post-mortem CUSUM, taking the mean of the string of data values as the target, and often get very useful insights into process changes as a result.
    Best Wishes
    Bower Chiel
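    A small Python check of the sample-size algebra above (the figures match the 2% example; the function names are just for illustration):

    import math

    # For a p-chart with three-sigma limits, LCL = p - 3*sqrt(p*(1-p)/n) > 0
    # rearranges to n > 9*(1-p)/p.
    def min_sample_size_for_positive_lcl(p):
        """Smallest sample size giving a strictly positive p-chart LCL."""
        return math.floor(9 * (1 - p) / p) + 1

    def p_chart_lcl(p, n):
        """Lower three-sigma p-chart limit before any clamping at zero."""
        return p - 3 * math.sqrt(p * (1 - p) / n)

    print(min_sample_size_for_positive_lcl(0.02))  # 442, since the bound itself is exactly 441
    print(p_chart_lcl(0.02, 441))                  # ~0, right on the boundary
    print(p_chart_lcl(0.02, 450))                  # a small positive LCL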

    0
    #169728

    Sorour
    Participant

    Dear Dr. Pandey,
    I used Minitab some time ago to study this problem. My approach was to scale all values to a higher magnitude to eliminate the negative CLs, test stability, and then scale back.
    Perhaps I’m misguided, but I’ve never found a case where the assumption of setting a negative CL = 0 posed a problem with the assessment of stability.
    As for your view that constrained values have a non-normal distribution, I’m not sure, because I thought multiplying a normal distribution by a scalar would not change the shape of the distribution.
    Perhaps someone else can clarify?
    Cheers,
    Paul

    0
    #169730

    Mikel
    Member

    What’s personal about asking him to stop advertising and giving round-worded, textbook advice? The guy’s offered nothing, but gets to advertise for it.
    Some people are impressed and think it’s important when they see a Dr. or PhD attached. There’s nothing impressive about this guy.

    0
    #169731

    Mikel
    Member

    Having them default to zero has been done since Shewhart and taught in SPC books.
    It is evidence that you are dealing with a skewed, naturally bounded distribution. Read Wheeler if you think that it is an issue.

    0
    #169739

    Dr. Ravi Pandey
    Participant

    Paul,
    I was not going to make any more comments here, but you got me intrigued. While I am at it, I will add a little more information; hope you do not mind.
    In engineering, we make lots of assumptions. The results and the risks associated with those assumptions are what we need to know. So, setting control limits to zero is something of a simplification, and there are risks. Now the question is what the extent of that risk is. Without offending Stan about my PhD, I recall I had developed a model (that was a million years ago) that no one could solve; it was a non-linear PDE. I knocked on the doors of some top-notch mathematicians for help and in the end did what every engineer does: I made assumptions, simplified, and solved. Now, anyone reading those publications has to understand that the results I published were subject to those assumptions. That is all.
    Minitab has a very good team, and they have done a lot of good work; I once asked them to add an additional product feature and they did (granted, I have not used Minitab in the last few years). However, they also run into issues here and there. I am not sure now, but I had a big discussion about their destructive GRR: nowhere did they explain what the risk/uncertainty of their approach is. Whether they fixed it or not, I am not sure. The method they chose is what the world uses; that does not mean it is right. It is just the best available at this time. It is right with its assumptions.
    I am going to give an example of a zero limit; take it as a concept, as I do not want to start another chain. Think of a kite: when it is in the air, it goes up and down a little bit, and you know what to do, and the kite flies at a mean height with a few feet of movement up and down. Now, if you put an artificial limit on either side of the mean, you do not know how far up or down the kite might naturally have gone; you do not know the extent of the variability, and thus you do not know your capability as a flier, that is, within what band you can maintain the height. And you do not know the type of measures you need to control the height.
    So, do we put in artificial limits? Of course. Is it the right thing to do? I do not know. I can definitely say that it is sometimes the practical thing to do. If the problem you are solving has little liability, you might be OK; if not, the business or you will have to face the consequences of not understanding the risks.
    That is all. I am one of the biggest proponents of Six Sigma, so much so that I have used it for corporate-level strategic analysis, but I do not use it blindly.
    On the scalar multiplication, I have not looked into it. If you find out, let me know. On the surface, it sounds OK; however, I am not sure that it would address your issue of CL = 0. Would that not just scale the standard deviation also?
    I am not a statistician, so it takes me some effort to come to conclusions on questions such as the one you raised.
    Hope that makes better sense. If you want to discuss further, please send me an email; I do not think it is of interest to most of the people here.
    rgds
    -ravi
     

    0
    #169744

    Mikel
    Member

    His parameter of interest has a natural limit of zero. Move the control limit to zero and move on to a relevant issue.
    Doc Ravi – send me an email if you still don’t get the simplest of concepts.

    0
    #169745

    Craig
    Participant

    Baskaran,
    If you are measuring “person effort” and are calculating the % of effort variation based on the estimated versus actual, it seems like negative values are likely. Correct me if I am wrong, but if your IT group estimates 40 hours for one of the tasks and it takes 30 hours, the result is -10/40, or -25%.
    If this is how you generate your plotted value, you do not have zero as a natural limit. Please clarify.
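    To illustrate in Python (the effort figures below are invented), an Individuals (X) chart of such a metric can quite legitimately have a negative LCL:

    # % effort variation = (actual - estimated) / estimated * 100, so values
    # below zero are expected whenever a task takes less effort than estimated.
    estimated = [40, 35, 50, 20, 45, 30, 60, 25]
    actual    = [30, 38, 47, 24, 41, 33, 55, 27]
    variation_pct = [(a - e) / e * 100 for e, a in zip(estimated, actual)]

    # Individuals (X) chart limits from the average moving range
    # (2.66 = 3 / d2, with d2 = 1.128 for moving ranges of span 2).
    mean_x = sum(variation_pct) / len(variation_pct)
    moving_ranges = [abs(b - a) for a, b in zip(variation_pct, variation_pct[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    lcl = mean_x - 2.66 * mr_bar  # negative, and meaningful, for this metric
    ucl = mean_x + 2.66 * mr_bar

    print(f"mean = {mean_x:.1f}%, LCL = {lcl:.1f}%, UCL = {ucl:.1f}%")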

    0
    #169747

    Sorour
    Participant

    Dear Dr. Ravi,
    I meant to imply that you scale the raw data. Clearly, if you increase the magnitude of the data, the standard deviation as a percentage of the magnitude becomes less.
    I believe we share the same approach: don’t assume anything you can check yourself.
    Regards,
    Paul
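    For what it’s worth, a quick numerical check in Python (with made-up data): multiplying by a constant scales the mean and the standard deviation together, so a negative LCL stays negative, whereas adding a constant offset shifts the mean without changing the standard deviation:

    import statistics

    data = [0.5, 1.2, 0.8, 2.0, 0.3, 1.5, 0.9, 1.1]  # invented, zero-bounded values

    def three_sigma_limits(xs):
        m, s = statistics.mean(xs), statistics.stdev(xs)
        return m - 3 * s, m + 3 * s

    print(three_sigma_limits(data))                    # LCL comes out negative
    print(three_sigma_limits([10 * x for x in data]))  # still negative, just 10x larger
    print(three_sigma_limits([x + 10 for x in data]))  # an offset lifts the LCL above zero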

    0
    #169753

    Taylor
    Participant

    Baskaran
    Your estimation of actual is the problem. You can handle this in one of two ways:

    1. Treat the negative numbers as special cause and set them to zero
    2. Re-evaluate your “estimation as actual” so that a negative number for the LCL is not possible

    As hacl has stated, it is possible to have a negative LCL for your calculations.

    0

The forum ‘General’ is closed to new topics and replies.