
Dumb Control Chart/Baseline Capability Question


#49862

    SiggySig
    Member

So let’s say I have a dataset for a process’s performance over the course of 1 year. The control chart shows the presence of special cause variation in several cases (points >3 sigma from the mean). Would you eliminate those points to calculate process capability and baseline performance, or keep them in the dataset?
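For concreteness, here is a minimal sketch of how points beyond the 3-sigma limits might be flagged, assuming the data suit an individuals (I) chart; the moving-range sigma estimate and the function name are illustrative choices, not details from the post:

```python
import numpy as np

def beyond_3_sigma(x):
    """Flag points outside the 3-sigma limits of an individuals (I) chart.

    Sigma is estimated from the average moving range (MR-bar / 1.128,
    the d2 constant for n = 2), the usual I-chart convention, rather
    than from the overall standard deviation.
    """
    x = np.asarray(x, dtype=float)
    center = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()          # average moving range
    sigma = mr_bar / 1.128                      # I-chart sigma estimate
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    return np.where((x > ucl) | (x < lcl))[0]   # indices of special-cause points
```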

    #171202

    Stevo
    Member

    I leave them in.  In too many cases I’ve seen “special causes” happen over and over and then you have “common cause”.  But if your pay or promotion depends on it, by all means take it out.
     
    Stevo

    #171203

    SiggySig
    Member

That’s what I thought too – I never “throw out” data, unless it’s truly from a different process (only had this happen once, however). Thanks, Stevo.

    #171206

    Tim
    Member

    Perhaps an equally silly question – does it really matter?  If you’re looking at process capability based on a year’s worth of data, does the inclusion or exclusion of the relatively few data points change the respective values? 

    #171209

    SiggySig
    Member

    Tim, actually it can make an enormous difference. I’ve seen several such datasets in the transactional world where the presence of special cause can shift the mean or median baseline performance significantly. I think where you may be off in your logic is the assumption that we’re talking about “relatively few datapoints”. Definitely not always the case.

    #171210

    Tim
    Member

    O.K. – I can accept that.  I suppose my assumption was that there was control charting all along, not that there’s just data that is now being charted and special causes assigned.  Thanks. 

    #171211

    SiggySig
    Member

I see your point – no, I’m talking largely about the Measure phase of a DMAIC project in a new deployment (no prior process measurement to speak of, really). I wish yours were the scenario I’m dealing with, but, sadly, it isn’t. We’re not nearly that together yet. Management still treats common cause like special cause.

    #171215

    Severino
    Participant

    In order for you to complete a process capability study you need to ensure you have met the assumptions.  Namely:

    That your distribution is normal
    That your process is in control
Based on your post, I don’t think you can really claim the process is in control.  The reason this matters for any measurement of Cp, Cpk, Ppk, etc., is that even though you are looking at historical data, you are really trying to make a prediction about the future.  The problem then becomes that you cannot make a prediction about a process that is not stable… at least not an accurate one.
There is a lot of legwork that should go into reporting process capability, even if it seems trivial.  Be sure to perform a Gage R&R and develop a reaction plan for your control charting.  You want to be confident that your measurement system is good and that you can correct problems before they slip out of control.  Once you’ve done this and have verified that your data are normal (if not, consider nonparametric methods or transforming your data), you can provide a much better estimate of your process capability (and yes, leave all your data in the measurement).
    Finally, remember that it is just that… an estimate.  So be sure to always report your process capability indices with a confidence interval around them. 
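To make that last point concrete, here is one way such an interval could be computed; this sketch uses Bissell’s normal-approximation interval for Cpk, which (like the index itself) assumes normal, in-control data, and the function name is illustrative:

```python
import numpy as np
from scipy import stats

def cpk_with_ci(x, lsl, usl, alpha=0.05):
    """Cpk point estimate plus an approximate confidence interval.

    Uses Bissell's normal-approximation interval; both the index and
    the interval assume normal, in-control data - exactly the caveats
    raised in this thread.
    """
    x = np.asarray(x, dtype=float)
    n, mean, s = len(x), x.mean(), x.std(ddof=1)
    cpk = min(usl - mean, mean - lsl) / (3 * s)
    z = stats.norm.ppf(1 - alpha / 2)
    half_width = z * np.sqrt(1 / (9 * n) + cpk**2 / (2 * (n - 1)))
    return cpk, cpk - half_width, cpk + half_width
```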

    #171238

    SiggySig
    Member

No, I’m certainly not claiming it’s in control. What I’m questioning is whether to include special cause/outliers in calculating a process baseline for the purpose of setting a project goal in a DMAIC project. Sounds like the answer is yes. Thanks.

    #171241

    Stevo
    Member

    Jsev607 –
     
I shouldn’t argue with you, because it sounds like you gave a great textbook answer.  But today’s bee in my bonnet is people making things more complicated than they need to be.
     
In most (93.4%) of our processes we do not need this level of confidence before we make a decision.  Going through all this work sounds like overkill.  Just throw it into a control chart, put your thumb in front of your face, close one eye and see if it looks good.
     
You suggest transforming the data (I’m not a big fan) or going nonparametric, performing a Gage R&R, and then you leave yourself a safety net by saying it’s just an estimate and always stating your confidence interval.
     
I certainly do not want to water this down or dumb it down too much, but if we are trying to make quality for the people, by the people, we need to adjust how we do things.  I want every person to make decisions with data.
     
    Stevo
Unwritten and unpublished author of “Pull your thumb out of your @$$ and other motivational sayings”.

    #171244

    DaveS
    Participant

    SiggySig,
    The best advice I’ve seen for this situation and what I have repeatedly done is:
    1) Eliminate outliers ONLY if you have a direct assignable cause and have instituted controls to eliminate them.
2) Analyze twice: once with and once without the outliers. It is amazing how often, particularly on larger data sets, the PRACTICAL conclusions and effects are negligible between the two. With today’s statistical packages, this dual analysis is literally a few clicks of the mouse (a sketch of the idea follows this post).
3) If there are differences between the with/without results, then make conditional statements, e.g.: if these outliers are actually common cause, capability is 1.0; if not, and we can find and eliminate the special causes, it is 1.5.
    Finally, most software packages do non-normal capability assessments quite easily. Advice that you have to have normality to proceed is worth exactly what you have paid for it on this forum. If you don’t have the packages, see Bothe “Measuring Process Capability” for a method you can do by hand.
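A minimal sketch of the “analyze twice” idea from point 2, assuming the suspect points have already been flagged (e.g., by control-chart rules); Ppk here uses the overall standard deviation, and all the names are illustrative:

```python
import numpy as np

def dual_capability(x, lsl, usl, outlier_idx):
    """Compute capability twice - with and without the flagged points -
    so the practical difference is visible side by side.

    Ppk uses the overall (long-term) standard deviation; outlier_idx
    should come from control-chart rules, not from eyeballing.
    """
    def ppk(data):
        m, s = data.mean(), data.std(ddof=1)
        return min(usl - m, m - lsl) / (3 * s)

    x = np.asarray(x, dtype=float)
    return {"with_outliers": ppk(x),
            "without_outliers": ppk(np.delete(x, outlier_idx))}
```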
     
     

    #171246

    SiggySig
    Member

Stevo, thanks for chiming in here. My experience thus far as a BB working in the transactional world is that I almost always have to err on the side of pragmatism over theory. Many MBBs I encounter seem to approach things exactly the opposite way – textbook over reality.

What got me started on this was a comment relayed to me by a business leader that we should “throw out” data from several months because there’s a special cause that is going to happen every year and can’t be fixed, and that an MBB here had agreed with that approach. I don’t buy, first, that this special cause can’t be mitigated, nor do I buy that I should just ignore data because it’s going to look worse than the rest of the year. Don’t we want to know whether the uptick in defects due to this special cause is getting lower year over year?

Lastly, I am almost always dealing with processes that generate non-normal data (costs, cycle times, etc.) – in practice the normality assumption doesn’t seem to matter much, except when setting proper control limits for special cause detection. (I actually blogged on that very subject over on the iSixSigma blogs a while back.)
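For skewed transactional data like the costs and cycle times mentioned here, one common non-normal capability approach (in the spirit of the Bothe reference DaveS cited) replaces mean ± 3 sigma with the median and the 0.135/99.865 empirical percentiles; the sketch below is illustrative and needs a fairly large sample for the tail percentiles to be stable:

```python
import numpy as np

def percentile_ppk(x, lsl, usl):
    """Percentile-based capability for non-normal data.

    Replaces the mean +/- 3-sigma span with the median and the
    0.135 / 99.865 empirical percentiles, so no normality assumption
    is needed - at the cost of requiring a large sample for the
    tail percentiles to be stable.
    """
    x = np.asarray(x, dtype=float)
    p_lo, med, p_hi = np.percentile(x, [0.135, 50.0, 99.865])
    upper = (usl - med) / (p_hi - med)
    lower = (med - lsl) / (med - p_lo)
    return min(upper, lower)
```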

    #171250

    Outlier, MDSB
    Participant

    Normality is not a prerequisite for an SPC chart.

    #171251

    Tony Bo
    Member

Normality isn’t an issue when dealing with SPC charts.
Additionally… if your business leader says there is a special cause that happens EVERY year, then I definitely wouldn’t throw out the points.  I think you are right on point in disagreeing with your business leader.

    #171253

    Severino
    Participant

Stevo –
I agree wholeheartedly that practicality and common sense should always rule over the textbook; I have often had to deviate from it myself in the interest of time, resources, etc.  I suggested the path I presented only because it represents the robust method, and I have seen a lot of bad capability studies out there.  I even created a few when I was less experienced.
Years ago, most people had never heard of Cp or Cpk.  Now everyone in the world sees them as a panacea.  I’m not suggesting the OP or anyone else on these forums is guilty of this, but I’d rather point to the textbook and let people knowingly assume the risk by deviating than let them walk into management saying everything is fine, only to end up on the unemployment line the next week.
If a Gage R&R isn’t easily achieved, or the measurement method is well established, by all means don’t do it.  If the data isn’t normal or in control, that’s fine too.  Just do yourself a favor and put a section on your report labeled “Limitations” and state what assumptions you didn’t validate.  This covers you, it proactively makes others aware, and you can always throw it on the short list (sarcasm) of things to do when you have more time available.
Finally, if you do have the time and resources available to you, go through all the motions.  Get yourself some process flows or SIPOCs.  Do some FMEAs, cause & effect, etc.  Perform some DoEs.  Generate control charts, develop reaction plans.  Go the whole nine yards.  As Six Sigma practitioners, more often than not we are going back to correct processes that are less than optimal because somebody cut corners (whether voluntarily or involuntarily).  It doesn’t make sense to repeat the same mistake if you can help it, but we’ve all been in situations where we’ve had to draw the line due to external pressure.  Best of luck to you, Siggy.

    #171353

    Rhineg
    Member

Agree with you wholeheartedly that you should NOT “throw out” data just because it is inconvenient to a business leader. Ask the business leader whether the customers’ expectations are different during that special time of year. The process needs to meet customer expectations every time the customer has a need.

