
Calculating Control Limits for Unilateral Data

  • #39095

    Restagno
    Member

    I need to come up with a set of control limits for a part characteristic that only has a USL. The data for this process are taken in subgroups of 5.
    The problem can be divided into two main issues:
    1) When I plot the data on an Xbar/R chart, the within-subgroup and overall variation is very significant, and it is the direct cause of many of our quality rejects. The data show lots of outliers (points outside the USL).
    2) If I calculate the control limits from this data, they seem very tight; on the other hand, if I transform the data and calculate the control limits again, they seem quite loose. In either case, the members of my team think that’s not the way to go (note: none of them has real experience with SPC).
    Honestly, I don’t have much experience dealing with this type of data and would very much appreciate any help you can provide.
    Thank you very much!!
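    For reference, here is a minimal Python sketch of how Xbar/R control limits are typically computed for subgroups of 5, using the standard Shewhart constants for n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114). The subgroup values below are placeholders for illustration, not the actual measurements.

    # Xbar/R control limits for subgroups of n = 5 (sketch with placeholder data).
    subgroups = [
        [10.2, 10.5, 9.8, 10.1, 10.4],
        [10.0, 10.3, 10.6, 9.9, 10.2],
        [10.4, 10.1, 10.0, 10.5, 10.3],
    ]
    A2, D3, D4 = 0.577, 0.0, 2.114                   # Shewhart constants for n = 5

    xbars = [sum(g) / len(g) for g in subgroups]     # subgroup means
    ranges = [max(g) - min(g) for g in subgroups]    # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)                # grand average
    rbar = sum(ranges) / len(ranges)                 # average range

    ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar    # Xbar chart limits
    ucl_r, lcl_r = D4 * rbar, D3 * rbar                        # R chart limits
    print(f"Xbar chart: LCL={lcl_x:.3f}  CL={xbarbar:.3f}  UCL={ucl_x:.3f}")
    print(f"R chart:    LCL={lcl_r:.3f}  CL={rbar:.3f}  UCL={ucl_r:.3f}")

    Note that these limits come entirely from the data themselves; the USL never enters the calculation.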

    0
    #118136

    walden
    Participant

    If your current USL is unobtainable at this time, lower your USL to the lower bound of your confidence interval. You are not transforming your data, just your current goal. Can you collapse like defects? This should improve your variance, leaving only a few outliers that should be a red flag and be addressed first.
    Chris
     

    0
    #118138

    Restagno
    Member

    Thanks Chris, but we all know well (here in the plant) that there are some special causes of variation we need to take care of, and they push our final product out of spec, resulting in customer complaints and things of that nature.
    The truth is that SPC has never been used in this company. Now I’m trying to introduce it and educate people to use it.
    I know for a fact that our current processes are capable of producing goods well within our current specs, but there are some sources of variation that need to be brought under control, and because there is no control over them, we frequently have parts outside the specs.
    What I’m trying to do is demonstrate analytically that we use control limits to catch and learn about points outside these boundaries, but I’m having some problems defining these limits in the first place.

    0
    #118140

    walden
    Participant

    I agree with you, based on the inconsistent quality of parts I see every day, that you have a product that needs to meet certain needs.  Based on your earlier statement, “the within and overall variation is very significant and it is the direct result of many of our quality rejects,” the production line’s ability to maintain a consistent process is degraded. By lowering the USL for presentation purposes only, you mask (hide) the smaller variances to key on the extreme outliers. Then you look at the frequency, cost, and ownership of the process and decide if it needs to be fixed. If not, you filter out this variable and attack the others.  Statistically, the outliers and the variance before collapsing the data are proof that the process is broken.
    Chris

     

    0
    #118149

    Mikel
    Member

    Sergio,
    Those who offer advice without understanding your data are dangerous. Telling you to lower your USL is just plain dumb.
    Post your data including what you did to try to transform it and you will get real help. If you need, send the data to [email protected] and I’ll help.

    0
    #118151

    walden
    Participant

    Stan,
    Am I to understand that you never collapse data or lower thresholds to reduce variance, in order to remove redundancy and highlight the outliers that still exist?  Or do you just “fix” variance randomly, hoping you have found a correlation and calling it a cause? At no time did I tell the poster to keep the USL lowered for final product output to the customer.  I introduced an alternative way to present and review the data to help filter out noise. Yes, noise can be taken from the back office to the floor if understood correctly. Of course, I’m sure you have never seen a process change implemented that was not thoroughly looked into and understood, just to see it fail.  What’s funny is I rarely see data presented in the forum before you post a suggestion. Of course, I did ask in the post what others such as yourself thought, but I guess that was “dumb” of me.
    Chris
     

    0
    #118153

    Mikel
    Member

    What in the world are you talking about?

    0
    #118178

    Cone
    Participant

    Hi Chris,
    I like the concept of filtering out “noise” factors effect on the data.
    It is something new to me, and I would like to see an example of how it is applied, so that I would be able to use it myself next time I am analyzing data with lots of outliers.
    My email is: [email protected]

    0
    #118197

    Restagno
    Member

    Chris,
    Thanks for your advice and comments, but isn’t it the job of control limits to do what you have just advised?
    Stan,
    I have sent you a set of data I’m currently working on. The subgroup size is 5. This is what I did to it,
    1) I calculated the control limits assuming the data are normal and presented them to my team, but they all thought the limits were too tight; then I ran a normality test and found that my data are not normal.
    2) So I transformed the data using the Johnson transformation (with a P-value of 0.005), and they did not like that either.
    At this point I’m wondering whether these people are losing the battle before it even starts, or whether the results of my analysis are in fact inadequate.
     
    Thanks for your help.
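    For reference, a minimal Python sketch of the check described above, assuming SciPy is available. Minitab’s Johnson transformation is not implemented in SciPy, so a Box-Cox power transform is shown here as a rough stand-in, and the values are placeholders rather than the actual data.

    # Normality check and power transform (sketch with placeholder data).
    import numpy as np
    from scipy import stats

    values = np.array([0.8, 1.1, 0.9, 1.4, 2.7, 1.0, 1.2, 3.5, 0.7, 1.3,
                       0.9, 1.6, 1.1, 4.2, 1.0, 0.8, 1.5, 1.2, 2.1, 0.9])

    # Step 1: test normality of the raw data (Shapiro-Wilk used here).
    stat, p_raw = stats.shapiro(values)
    print(f"raw data: Shapiro-Wilk p = {p_raw:.4f}")

    # Step 2: if the data are clearly non-normal, try a power transform
    # (Box-Cox requires strictly positive data).
    transformed, lam = stats.boxcox(values)
    stat, p_tr = stats.shapiro(transformed)
    print(f"Box-Cox lambda = {lam:.3f}, transformed p = {p_tr:.4f}")

    # Control limits computed on the transformed scale must be transformed
    # back before they are compared with the original-scale USL.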

    0
    #118355

    Obiwan
    Participant

    Sergio…confusing!…it does not MATTER whether someone LIKES the results!  A control chart (with control limits!) is used to determine the stability of the process.  If it is NOT stable…then DO something about it…but do NOT change the control limits just because you don’t like them.
    That is DUMB!
    The limits are NOT too tight, they are what they are.  Jeez!
    And, by the way, a Unilateral Specification Limit has nothing to do with what the control limits should be on an Xbar/R chart.
    Obiwan
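    To put that point in a sketch (continuing the placeholder Python example from earlier in the thread, with made-up limits): the chart is read by flagging subgroup means that fall outside the calculated limits, and those signals are what get investigated.

    # Flag subgroup means outside the Xbar limits (placeholder values).
    def out_of_control(xbars, lcl_x, ucl_x):
        """Return indices of subgroups whose mean falls outside the control limits."""
        return [i for i, x in enumerate(xbars) if x < lcl_x or x > ucl_x]

    signals = out_of_control(xbars=[10.2, 10.6, 9.4, 10.1], lcl_x=9.7, ucl_x=10.5)
    print("subgroups signalling special causes:", signals)   # -> [1, 2]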

    0
    #118358

    Ken Feldman
    Participant

    I love it when you put aside the light saber and whap ’em with the blaster.  Scalpel vs. cudgel.  Glad you have joined us.  Haven’t heard from Sigmordial for a while.  Now we need some more of Yoda and Luke and the whole gang is back.

    0
    #118360

    Obiwan
    Participant

    Even the most refined of the Jedi Masters must occasionally pick up a cudgel and use it…sometimes the light saber is simply too elegant to use!
    Obiwan

    0
    #118361

    Ken Feldman
    Participant

    Bless you my old foe and teacher.  Given some of the posts of late, a bulldozer is too elegant.  Keep up the good work.

    0
    #118378

    Restagno
    Member

    Obiwan,
    Your post lacks character. The fact that you emphasized whether the control limits are liked or not makes me think that you did not even read and understand the heart of the matter.
    I would like to read a good comment from a knowledgeable, experienced person, but I’m starting to think that I will not get that here.
     
    Thanks

    0
    #118389

    Mikel
    Member

    Sergio, you got good advice. Take time to think about it instead of posting this whining.

    0
    #118927

    RonJ
    Member

    Obiwan,
    That is how I understand it too … use the chart to identify special causes, then fix them. If you know your process is eaten up with special causes, control charts are not the tool to use.

    0
    #118945

    Fred Bradbury
    Participant

    Sergio,
    Could I see the data please? I might be able to offer some help. I have had many instances with data bound by zero. My email is [email protected]

    0
    #118946

    Fred Bradbury
    Participant

    Sergio,
    Sorry, I posted in the wrong location. I had a similar instance with data bound by zero in my last job. It was for electrical leakage, with zero being perfect. Once I look at what you have, I might be able to offer some suggestions.
    I am looking for my BB project; if I find my presentation, I can forward it if you like.

    0
    #119106

    Ang
    Participant

    Sergio:
    My thoughts.
    1. It appears that you’re dealing with a distribution that is not in statistical control. Despite all the tools that people like to use in unusual situations, a fundamental tenet of SPC is that the process must be in statistical control before control charts can be used effectively.
    2. A control chart is a snapshot of the process at a given moment. Taking the quality rejects out of the distribution before the SD and limits are calculated gives you numbers that mean nothing relative to the process.
    Suggestions:
    1. Work with whomever you need to in the organization to get the process in statistical control.
    2. Identify the major causes of variation (assignable or special causes) and do your best to eliminate them.
    3. Once that’s done, establish control limits from all the data and use those until it makes sense to change them (after a process change, for example).
     
    Remember, if the advice you’re getting sounds too complicated, it’s probably not right.
     
     

    0
    #119112

    Restagno
    Member

    Peter,
    Thank you very much for your advice. I certainly appreciate your thoughts and suggestions.
    Sergio

    0
    #120681

    Barphly
    Participant

    Guys, I deal with quite a bit of one-sided data, flyers, and CL and CPU questions.  Any help is appreciated… for now, on the UCLs.
    Unfortunately, we deal with data on processed material that is packaged and then analyzed.  Special causes are not easily identified (hence the flyers).  The variables affecting the final analysis are extensive.  We are committed to offering our customers “control charts” of these analysis results, and we battle over what data to include when calculating UCLs.  If we throw out flyers without identifying causes, are we “cheating,” even though our process is better reflected by the remaining data?  And then it comes down to: how far do you go throwing out flyers?
    Anyone who has similar experiences (perhaps this belongs in another forum thread), please offer some thoughts.  Thanks,  Barphly
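    For reference, a minimal Python sketch of the one-sided capability index (CPU) for a characteristic with only a USL: CPU = (USL - mean) / (3 * sigma). The data and USL below are placeholders; note that the overall standard deviation gives a Ppu-style figure, while a within-subgroup estimate (e.g. Rbar/d2) would give the Cpk-style equivalent.

    # One-sided capability (CPU) sketch with placeholder data.
    import statistics

    data = [4.1, 3.8, 4.4, 3.9, 4.6, 4.0, 4.2, 3.7, 4.3, 4.1]
    usl = 5.0

    mean = statistics.mean(data)
    sigma = statistics.stdev(data)       # overall (long-term) estimate of sigma

    cpu = (usl - mean) / (3 * sigma)     # only the upper spec matters
    print(f"mean={mean:.3f}  sigma={sigma:.3f}  CPU={cpu:.2f}")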

    0
    #120685

    BTDT
    Participant

    Barphly: I assume ‘flyers’ means outliers. Including or excluding a few outliers has little numerical effect on the conclusions. If it has a large effect, then I am worried that you are deleting enough real data to make the problem look less obvious.
    You should be reporting the average of a delivery, not the value of an average delivery. People play these games all the time. What can seem like ‘flyers’ can be caused by other factors. Your data might be non-normal for other reasons: https://www.isixsigma.com/forum/showmessage.asp?messageID=71680
    Plot all your data and put in notes for the exceptions, if you can explain them. If you can’t, then you have a problem. You can’t just say that 93% of your data is OK. Would you be satisfied if you found out that 93% of patients in emergency are treated in less than 90 minutes, while the remaining 7% are treated within 3 weeks?
    You will learn more from your ‘flyers’ than you will from the majority of your data. This sounds too much like management trying to change the numbers to make things look good. If you start down this slippery slope, where do you draw the line between acceptable data and ‘flyers’?
    BTDT
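    For reference, a minimal Python sketch of the comparison suggested above: compute the limits with and without the suspected flyers and see how much they move. An individuals (XmR) chart is used here only to keep the sketch short, and the data and cut-off are placeholders.

    # Compare XmR (individuals) chart limits with and without suspected flyers.
    import statistics

    def xmr_limits(values):
        """Individuals-chart limits from the average moving range (2.66 = 3/d2 for n = 2)."""
        moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
        mrbar = statistics.mean(moving_ranges)
        centre = statistics.mean(values)
        return centre - 2.66 * mrbar, centre, centre + 2.66 * mrbar

    data = [4.1, 3.8, 4.4, 3.9, 9.6, 4.0, 4.2, 3.7, 8.9, 4.1]     # placeholder values
    without_flyers = [x for x in data if x < 8.0]                  # crude cut for illustration

    print("all data:      ", xmr_limits(data))
    print("flyers removed:", xmr_limits(without_flyers))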

    0
