Six Sigma – iSixSigma Forums Old Forums General Center v. Spread

Viewing 13 posts - 1 through 13 (of 13 total)
• #47086

newbie
Participant

Once you determine your project Y, how is the decision then made between centering the process versus reducing its common cause variation?
What tools provide insight into this decision?
A very brief example would be awesome.
THANK YOU!!

#156550

Putnam
Participant

Depends on the system and problem.  A dot plot works for simple things.  Analysis of means can be helpful with multiple cavities.
Generally, if your data is all over the map but centered – reduce variation.  If the data is nicely clustered but out of spec – change the mean.  If you’ve got scatter and are off target, you’re better off centering and then reducing variation.  That said, the last time out I reduced and then centered, so do what makes the most sense to accomplish the end goal for a given project.
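Mike's triage rules can be sketched as a rough rule of thumb in code. This is purely illustrative – the cutoffs (more than one standard deviation from the spec midpoint counts as "off center", a 6-sigma spread wider than the spec counts as "too wide") are my assumptions, not standard values, and the plot is still the better first step.

```python
def next_move(mean, sd, lsl, usl):
    """Rough triage in the spirit of the rules above.

    The cutoffs are illustrative assumptions, not standards:
    'off center' = mean more than one sd from the spec midpoint,
    'too wide'   = natural 6-sigma spread wider than the spec width.
    """
    midpoint = (lsl + usl) / 2
    off_center = abs(mean - midpoint) > sd
    too_wide = 6 * sd > (usl - lsl)
    if too_wide and off_center:
        return "center first, then reduce variation"
    if too_wide:
        return "reduce variation"
    if off_center:
        return "shift the mean"
    return "capable - keep monitoring"
```

For a spec of 7-13, a process sitting on target with a wide spread gets "reduce variation", while a tight process running high gets "shift the mean".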
Mike

#156552

Snow
Participant

So running a quick graphical assessment (dot plot, box plot, histogram), superimposing the SL and target, and then making a decision based on what you see is an acceptable methodology here?

#156569

Jim Shelor
Participant

Newbie,
Run a capability analysis.
If Cp < required Cp, you have a common cause variation problem.
If Pp << Cp, you have a total variation problem usually associated with special cause variation.
If Cpk << Cp, you have a centering problem.
If Ppk << Cpk, you have a total variation problem, usually associated with special cause variation.
If Ppk << Pp, you have a centering problem.
If Ppk << Cp, you have a centering problem, total variation problem, or both.
Example:
Cp required = 1.7  Actual Cp = 1.5, you have a common cause variation problem.
Cpk = 1.4, you have a centering problem.
Pp = 1.2, you have a special cause variation problem.
Ppk = 0.9, you have both a centering and a special cause variation problem.
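To make Jim's comparisons concrete, here is a minimal sketch (mine, not Jim's) of computing the four indices. It assumes roughly normal data collected in rational subgroups, and uses the pooled within-subgroup standard deviation as the short-term sigma estimate – Rbar/d2 is a common alternative.

```python
import numpy as np

def capability_indices(subgroups, lsl, usl):
    """Return (Cp, Cpk, Pp, Ppk) for subgrouped data.

    Cp/Cpk use the pooled within-subgroup sigma (short-term);
    Pp/Ppk use the overall sample sigma (long-term).
    """
    data = np.asarray(subgroups, dtype=float)
    mean = data.mean()
    sigma_within = np.sqrt(data.var(axis=1, ddof=1).mean())  # pooled within
    sigma_overall = data.std(ddof=1)                         # all observations
    cp  = (usl - lsl) / (6 * sigma_within)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma_within)
    pp  = (usl - lsl) / (6 * sigma_overall)
    ppk = min(usl - mean, mean - lsl) / (3 * sigma_overall)
    return cp, cpk, pp, ppk

# Subgroup means jump around (a special-cause signature), so Pp ends up << Cp
cp, cpk, pp, ppk = capability_indices(
    [[9.9, 10.1], [9.8, 10.2], [11.9, 12.1], [11.8, 12.2]], lsl=8, usl=14)
```

In this made-up data the overall mean (11.0) sits exactly at the spec midpoint, so Cpk equals Cp and Ppk equals Pp; the large Pp << Cp gap is the between-subgroup shift Jim attributes to special cause variation.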
For deciding which problem to work on first, generally I look at the difference and work on the one that is causing the largest difference first.  This is also tempered by the one that costs the least and is the easiest to fix.
I hope this helps.
Jim Shelor

#156577

newbie
Participant

Jim,
You are a gentleman and a scholar…thanks beaucoup.

#156600

Mike Carnell
Participant

Newbie,
You need to slow way down. If you have just figured out your Y, then you are in the Define or Measure phase, which should be focused on the Y. Your solution is in the x's, and you are a long way from that. If you make some obtuse decision at this point based on your existing data, without even having run an MSA study, you have no clue what you are dealing with. The entire project could end up being the MSA.
Good Luck

#156601

newbie
Participant

Thanks Mike, for the response.
But I am still confused…please bear with me here…
Let's say the data collection plan is in place, potential causal inputs have been identified (KPIVs), and the project Y (KPOV) has been agreed upon. So I have my charter, process map, CTQ drilldown, and data collection plan complete with sampling plan, stratification strategy, and proposed analytics. I run the MSA (assume the measurement system is competent) and am now in a position for data collection (yes or no?)
I gather the data as outlined in the data collection plan, and am now ready to run the basic analytics: stability (run chart / control chart), normality (normal probability plot), spread, and center.
OK…so (assuming) I have clean data that is normally distributed from a stable process, and when a capability analysis is run, it is determined that the process is either uncentered or replete with common cause variation…now what? Is it a matter of looking at which action will reduce the amount of out-of-spec product more, process centering or variation reduction?
Or is it a common-sense approach of saying, OK, let's see if we can't get our process centered a bit more, remove some common cause variation, and talk to our customers about relaxing the SL a bit (theoretically, to make the point)?
Sorry for writing a book here, and thanks so much for your time…just trying to get a handle on the sequencing of the skill set, and how to determine the course of action when both spread and non-centeredness become an issue…If you see anything wrong with my thought process PLEASE comment…
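One way to frame newbie's question – which action removes more out-of-spec product – is to estimate the out-of-spec fraction under each candidate action, assuming a stable, normally distributed process. The numbers below are hypothetical, purely to illustrate the comparison.

```python
from math import erf, sqrt

def frac_out_of_spec(mean, sd, lsl, usl):
    """P(X < LSL) + P(X > USL) for X ~ Normal(mean, sd)."""
    cdf = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return cdf((lsl - mean) / sd) + (1.0 - cdf((usl - mean) / sd))

# Hypothetical process: spec 7-13 (target 10), currently at mean 12, sd 1.5
baseline  = frac_out_of_spec(12.0, 1.5, 7.0, 13.0)  # do nothing
centered  = frac_out_of_spec(10.0, 1.5, 7.0, 13.0)  # shift mean to target
tightened = frac_out_of_spec(12.0, 1.0, 7.0, 13.0)  # cut sd by a third
```

In this particular example centering wins (roughly 4.6% out of spec versus about 16% after only tightening the spread), but with a different starting point the ranking flips – which is exactly why the capability analysis has to come first.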

#156606

Mike Carnell
Participant

newbie,
Not a problem on how much you write – the more you write, the more we can understand about the process. You also need to understand that you may get lots of feedback from people who don't agree, and that doesn't make anyone wrong; it just means people have different approaches (watch, I will irritate Darth before this is done).
Assuming you have all that stuff in place, and you have run the MSA and it is acceptable (that is a big assumption – without knowing what you are doing or how, there is a pretty big chance it won't make it), we will assume it has.
You can worry about stability if you choose; I have never paid much attention to it (that is the part that will irritate Darth). You will have x's that move the mean and x's that affect the variation, which means you need to understand which variables do what. You also need some sense of priority. You can use the FMEA to synthesize your data from the Measure phase. There is a logical correlation between the categories in the FMEA and the Measure tools, i.e. capability studies can give you the Occurrence numbers, etc. It doesn't have to be the brainstorming love fest most people make it. Once you put some data into the FMEA, the RPN actually represents the knowledge you gained in Measure, and it lets you build a plan for working the process.
The Analyze phase is testing the assumptions around the RPNs. If someone says that Machine A runs better than Machine B, that is a hypothesis test; Supplier A vs. Supplier B, etc. Think about that – Supplier A runs differently than Supplier B, but you use both. What is the effect? You get a wider spread, so you now know a piece of what causes variation (maybe even one of those variables that affects the infamous stability – but rather than spending years getting ready to get ready to do something, you are moving everything forward). Grab a GB and give them a project to make the suppliers run the same (if you don't have a GB, use a supplier engineer, so the next time you get the infamous "we're working with the supplier" it doesn't mean they went to dinner with them – it means they actually created some value for you; novel idea). The value to you is the knowledge you just gained, and when the GB gets done you will understand what characteristic in the supplied material affects your variation. Now your supplier knows what is important to you as well.
From this you are going to start learning what your x's do and don't do. A guideline: knob variables move the mean (feeds and speeds, wave height, temperature, etc.); variation comes from everything else. That is a crude rule, but it seems to work.
You need to understand something very clearly – not all processes should be centered. Actually, very few processes are optimized when they are centered. Going back to an old example – if I am plating gold, do I want it thick or thin? From the business perspective I want it thin; from the customer spec, I don't want defects. I want to run that process as close to the low side as possible without running the risk of shipping defects, so my goal would be mean = LSL with std. dev. = 0. That won't happen, so you need to set a target that makes sense for the current process issue. If my spread is wider than the spec, then centering makes sense – and I make sure I have containment in place to chop off the ends, so I ship good stuff while I work on reducing variation (there was a string on the blog site similar to this). If the process has a reasonable spread, I want to push it to the place that makes the most business sense – not necessarily the center.
If you are in discrete manufacturing, you will find that most of your specs are legacy-type numbers. Don't be afraid to question them. Look at the MRB data and see what is getting bought off and shipped, and look at customer returns. That will tell you a lot about what is and is not causing pain, and what is and is not a legitimate spec.
This probably hasn't really set your mind at ease about what to do. There isn't a recipe, as much as people would like there to be. Your comment about common sense is probably pretty accurate – get as much of your process running inside the spec as possible (find the x's that move the mean and get the big part of the histogram inside the spec limits), if only because you are protecting your customer. Get containment on the stuff that is out of spec – that takes the emotional charge off the issue and protects the customer. Once you have all that, decide where the process needs to run – because it probably isn't the center – and start moving it there (look at some of the material around Cpm, or if you choose to sidestep the arm-wavers on this issue, go back to the really old material on Cpt – there is a contingent that likes to spend its efforts telling you that SS is crap and you need to run to targets; nice dogma, but it doesn't tell you anything about how to do it).
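For the Cpm Mike mentions, here is a minimal sketch (with made-up gold-plating numbers) of how a Taguchi-style index rewards running at the chosen target even when that target hugs the LSL:

```python
from math import sqrt

def cpm(mean, sd, lsl, usl, target):
    """Taguchi capability index: penalizes distance from the target,
    wherever that target sits inside the spec."""
    tau = sqrt(sd ** 2 + (mean - target) ** 2)
    return (usl - lsl) / (6 * tau)

# Hypothetical plating-thickness spec 5.0-8.0, business target just above LSL
at_target = cpm(mean=5.2, sd=0.1, lsl=5.0, usl=8.0, target=5.2)
centered  = cpm(mean=6.5, sd=0.1, lsl=5.0, usl=8.0, target=5.2)
```

Against the low-side target, the "nicely centered" process scores far worse than the one running on target – the index captures Mike's point that where to run the process is a business decision, not automatically the middle of the spec.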
Understand that the important thing is understanding how the process works. Understand which x's control the mean, which x's control the variation, and which variables interact. After that, it doesn't matter what anyone wants you to do with the process – you can make it run any place they want. That makes you much more powerful in your industry than the person who did a project to get it to the center with some reasonable std. dev. and then just moved on; if something changes, they need to run another project. Look at the mining companies – the input was created in nature thousands of years ago. Does it make sense to optimize around a particular set of variables, such as head grade, or to understand the process so that when head grade changes (which it does constantly) you know exactly how to process that material?
There is this huge mistake people make when they take on an SS project: they want to "fix" it right away – what we call the sitcom mentality. Americans have watched sitcoms for so long that they believe that when they are presented with a problem, it has to be solved in 30 minutes. It has taken them years to get processes this completely screwed up, so taking 90 days to unscrew it won't kill them. If you approach the issue from understanding the process and how the x's play together, then you know how to make them play nice regardless of what your customer wants.
Alright, I think I have now written more than you, but I don't seem to have SteveO here to give me a bunch of crap about it. I hope I have not made your confusion worse. Please let me know if I have, and I will do my best to straighten it out.
Interesting thread. I admire the fact that you are willing to walk into this with this question. I am a little confused myself, because people have done similar things in the past and gotten blasted; you are relatively unscathed at this point. The gods must be smiling on you.
Just my opinion.
Good luck.

#156610

Mike Carnell
Participant

newbie,
Here is a link to Robin Barnwell's blog post, which covers an issue similar to yours:
http://blogs.isixsigma.com/archive/treat_the_symptoms_not_the_cause.html
Good luck.

#156632

newbie
Participant

Mike,
I can't thank you enough for taking the time. I can teach myself the individual statistical tools, but the art of their application I cannot. Thanks again to you and all the SMEs. For those of us operating solo, it is invaluable.

#156641

Mike Carnell
Participant

newbie,
You are welcome. I hope something in that made sense.
Good luck

#156654

Participant

Adding to that: Cp and Cpk are short-term indices, and Pp and Ppk are long-term. You also need to know the difference in how they are calculated – each pair uses a different formula for the standard deviation.
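A sketch of the two sigma estimates the poster is referring to, using hypothetical subgrouped data. The short-term sigma here uses the classic Rbar/d2 range method (d2 = 2.059 is the standard control-chart constant for subgroups of size 4); the long-term sigma is just the plain sample standard deviation of all the observations.

```python
import numpy as np

# Hypothetical measurements: 5 subgroups of 4 consecutive parts each
subgroups = np.array([
    [10.1,  9.9, 10.0, 10.2],
    [10.3, 10.1, 10.2, 10.4],
    [ 9.8, 10.0,  9.9, 10.1],
    [10.2, 10.0, 10.3, 10.1],
    [ 9.9, 10.1, 10.0,  9.8],
])
d2 = 2.059  # control-chart constant for subgroup size n = 4

# Short-term sigma (feeds Cp/Cpk): average subgroup range over d2
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
sigma_st = ranges.mean() / d2

# Long-term sigma (feeds Pp/Ppk): sample SD of all 20 observations
sigma_lt = subgroups.std(ddof=1)
```

Drift between subgroup means inflates the long-term sigma but not the short-term one, which is exactly why Pp/Ppk can lag Cp/Cpk for the same data.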

#159611

newbie
Participant

Just read your post again…for the sixth time…really informative. Just wanted to thank you, Mike.


The forum ‘General’ is closed to new topics and replies.