iSixSigma

jediblackbelt

Forum Replies Created

    #166576

    jediblackbelt
    Participant

    Actually there is.  The tubs come from different machines but get mixed at the customer.  So I know what my overall percent fallout is, but not which machine it comes from, which is why I went this route trying to track down the machine problem.  Since then we have sampled parts off of each machine to track it down, but I wanted to “clean up my mind” on how to go about sampling like this in the future.

    0
    #160840

    jediblackbelt
    Participant

    Agreed.  I would think about it this way: if you use the Xbar chart with 1 week as your subgroup, then you will be able to tell week-to-week differences and whether there is within-week variation, correct?
    However, you won’t be able to see an issue that spikes every Wednesday and then drops back down.  The Xbar averaging will wash that out, and your weekly average could stay the same.  I am just a bigger fan of the XmR chart unless you really understand your Xbar chart.
    My 2 cents.

    0
    #158566

    jediblackbelt
    Participant

    For manufacturing… THE GOAL (Eliyahu Goldratt)

    0
    #158565

    jediblackbelt
    Participant

    Wow… tons of answers, so I might as well give my two cents’ worth and maybe only that.
    Efficiency is based on actual versus standard.
         Earned hours = pieces * standard hrs/piece
         Efficiency = earned hours / hours worked on standard
    Utilization = hours making parts / clocked hours
    Productivity = utilization * efficiency
    Just another opinion.  I have always looked at it this way: efficiency is measured against a normal, engineered standard of expectation, so it tells you how you perform while working against that standard.  Then you calculate how well you utilize your labor.  From those two things you calculate how productive you really are.  So you could be 100% efficient but only utilize your crew for 50% of the day, making you 50% productive.  Now you can start to move your crews around to make your plant more productive and minimize labor cost.
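    To make the arithmetic concrete, here is a minimal sketch in Python; the numbers are made up purely for illustration:

        pieces = 480
        std_hrs_per_piece = 1 / 60        # engineered standard: 1 minute per piece
        hours_on_standard = 10            # hours actually spent making parts
        clocked_hours = 16                # hours paid

        earned_hours = pieces * std_hrs_per_piece        # 8.0
        efficiency = earned_hours / hours_on_standard    # 0.80
        utilization = hours_on_standard / clocked_hours  # 0.625
        productivity = utilization * efficiency          # 0.50
        print(f"{efficiency:.0%} efficient, {utilization:.1%} utilized, {productivity:.0%} productive")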
     

    0
    #153663

    jediblackbelt
    Participant

    Cool.  Not off base, just misread.  Good thing this isn’t a stats test.  Thanks for the clarification.
     

    0
    #153636

    jediblackbelt
    Participant

    Maybe I am reading his question wrong, but I read it as: if I flip a coin 3 times, what is the probability that exactly one of them is heads?  If that is the case, then wouldn’t you use the binomial probability?
    I come up with a .375 chance that, flipping a coin 3 times, exactly 1 of the flips will be heads: (3 choose 1) * 0.5^1 * 0.5^2 = 3 * 0.125 = .375.
    Am I off base?
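    If it helps to verify, here is a quick sketch using only Python’s standard library:

        import math

        n, k, p = 3, 1, 0.5
        # binomial probability of exactly k heads in n fair flips
        prob = math.comb(n, k) * p**k * (1 - p)**(n - k)
        print(prob)  # 0.375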

    0
    #152460

    jediblackbelt
    Participant

    There are a lot of ways to handle things of this nature.  A time study is one way, but it will ignore the other duties that are typically done by the team.  So if all they do is repeatable and consistent, a time study would probably work well.  In my opinion, when dealing with paperwork and processing it through a system, you can take the times to get an idea of how long it takes to fill out, but you can glean much more information by doing a good work sampling study.  Then you can also figure out how much time it takes to process through, as well as how much time is lost to unforeseen items that you may want to work on eliminating first.
    Just a thought.  Configure your work sampling study over the course of a couple of weeks and see what people are really doing.  It may surprise you.
     

    0
    #141777

    jediblackbelt
    Participant

    Seriously, Pinto… if you have to ask, then do you even have one?
    Maybe your new Delta Tau name should be Flounder…

    0
    #141776

    jediblackbelt
    Participant

    You know what they say:
    if it looks like it and smells like it, it probably is it…

    0
    #140703

    jediblackbelt
    Participant

    John –
    Gotta love Tukey tail-count types of experiments.  I can’t say enough about them to everyone else.  It’s amazing how something so simple can save so much time in a project and then lead right down to fixing the problem.
    You may also want to look into multi-vari charting against your production/quality defects, to see if you have a problem across the board or if you can narrow it down to different inputs from shifts, operators, or suppliers.
    It might save you a lot of time and money to do more investigation first versus jumping straight into DOE.  But several people have said it already – you may feel confident in how you are measuring things, but do you know everyone else is just as good?  Start with MSA; I have found about 75% of my projects end there, with a correction of past bad practices in measuring the defects/defectives.
    Good luck.

    0
    #135926

    jediblackbelt
    Participant

    Nothing more than a Star Wars fan trying to become one with the universe and six sigma.

    0
    #135914

    jediblackbelt
    Participant

    Next step… try to define a little more of what you want.  Everyone here has enough spare time to go ahead and give you everything you need.  No thanks necessary.
    Darth –
    I am in the middle of a project that needs completing.  Got any free time to go ahead and complete it for me?  I really need to get out to the Death Star and sacrifice some Wookiees.

    0
    #135912

    jediblackbelt
    Participant

    Try QIMacros.  He has a ton of stuff centered around Excel.  The Pugh matrix is in there.

    0
    #132841

    jediblackbelt
    Participant

    I would like to request that somebody like a Darth, Stan, or Mike Carnell answer as well.  Their opinions in this forum are typically right on the money (insert large smooching sound here).  For those of us that have left companies as BBs and done a lot of training of others and projects: how do you gain the rank of MBB legitimately without having a company or training program “knight” you Sir MBB?
    I can easily call myself an MBB and show my BB results, but how valid is it?  The opinions of good MBBs are very valuable here.  Do you have to know and master all the tools?  What if you don’t know all the DOE tricks of the trade, or can’t do all the calculations off the top of your head, but you can work a project, get results, and work the stat packages fairly well?
    In hopes of good responses.
    Thanks,
    JediBlackBelt

    0
    #132527

    jediblackbelt
    Participant

    Hector –
    I would look at performing an Intraclass Correlation Coefficient (ICC) study.  You can look up information from Don Wheeler, or you could send me your email and I could get you an Excel worksheet that explains it.  What you primarily are doing is comparing the variation of a known standard with what you are measuring, and seeing how much variation is attributable to measurement error, either through the system or the gage, depending on how you run the test.
    You would take the reading off of your machine and then compare it to the amount you are getting out each time.  It’s an easy test to use, and it works well in a lot of situations, including things like inventory and transactional studies.
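    For the curious, here is a rough sketch of the one-way ANOVA form of the ICC, often written ICC(1,1).  This is my own illustration, not Wheeler’s worksheet; the readings are made up, with parts in rows and repeat measurements in columns:

        import numpy as np

        def icc_oneway(data):
            """ICC(1,1) from a one-way ANOVA: data is an (n_parts, k_trials) array."""
            n, k = data.shape
            part_means = data.mean(axis=1)
            msb = k * ((part_means - data.mean()) ** 2).sum() / (n - 1)        # between parts
            msw = ((data - part_means[:, None]) ** 2).sum() / (n * (k - 1))    # within parts (measurement error)
            return (msb - msw) / (msb + (k - 1) * msw)

        readings = np.array([[10.1, 10.2], [12.0, 11.9], [9.5, 9.6], [11.2, 11.1], [10.8, 10.9]])
        print(icc_oneway(readings))  # share of the variation NOT due to measurement error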

    0
    #131840

    jediblackbelt
    Participant

    Even though I agree that this question is basically odd for anybody that has had any of the training, I will try to give some response.  I figure by now you have been beaten down enough.
    Use the tools where you need them.  I have seen projects completed at the MSA level because we found the process was fine but the gaging and inspection were nowhere near accurate.  Work the project until you find the solution that you need.  Let the tools guide your actions, Black Belt Boy.
    Use what you need to use and don’t use what you don’t.  There is no need to run a full factorial 3-level, 4-factor DOE when you can get the results you need by changing the gage.
    Taking a refresher course and asking more questions might help as well.  Another poster threw out a prop for a program; a group I would suggest is Promontory Management Group and their QuickSigma program.  I’ve never used it, but I have seen a demo, and they were my Master BBs.  Great group of guys.
    Good luck.  Stay in school.

    0
    #131412

    jediblackbelt
    Participant

    Looks like nobody else is jumping in, so I will break the ice and start to take the heat, because I also want to know a better way of doing it.
    What I have done is to perform repeats within the run, so my response is a percentage of failures.  For example, within one run I might sample 100 parts and find that 40 are bad, giving 40%.
    In the next run I might sample 100 parts and find 80 bad, giving 80%.  I use those percentages as my output and then see what the DOE shows to give me the best result.
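    In code form, the response column is just the fraction defective per run.  Some texts also suggest a variance-stabilizing transform for proportion responses, which I show as an option here; all numbers are made up:

        import math

        defects = [40, 80]        # bad parts found in each run
        sampled = [100, 100]      # parts inspected per run
        p = [d / n for d, n in zip(defects, sampled)]    # [0.4, 0.8] <- DOE response
        p_stab = [math.asin(math.sqrt(x)) for x in p]    # optional arcsine-sqrt transform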
    I have seen several articles floating around on this, but I have not read any of them, and unfortunately nothing else I was taught in training works much better.
    Good luck and let the responses flow…
     

    0
    #124330

    jediblackbelt
    Participant

    How about trying a DOE on the gages and operators?  You could then show whether the difference is due to the operators, the gage, or the interaction between the two.
    One suggestion: if you do a paired t-test, use only one operator.  Otherwise you are throwing in a variable that you can’t account for when trying to determine your gage bias.

    0
    #124329

    jediblackbelt
    Participant

    How I have always attacked OEE for a cell is to calculate the uptime based on total machine-hours.  So if you have 10 machines and they can all run for 8 hours, you have a total of 80 machine-hours to work from.  If a non-bottleneck machine goes down for 5 hours, you subtract 5 machine-hours.  If your bottleneck goes down for 5 hours, you lose those hours across every machine, so you lose 50 machine-hours.  This may or may not be strictly true because of WIP, but of course you should be working toward one-piece flow, and regardless, time lost on a bottleneck will eventually hit every piece of equipment.
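    As a quick sketch of that machine-hours bookkeeping, using the numbers from the example above:

        machines, shift_hours = 10, 8
        total_machine_hours = machines * shift_hours      # 80

        # scenario A: a non-bottleneck machine down 5 hours
        uptime_a = (total_machine_hours - 5) / total_machine_hours             # 93.75%
        # scenario B: the bottleneck down 5 hours starves all 10 machines
        uptime_b = (total_machine_hours - 5 * machines) / total_machine_hours  # 37.5%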
    The other thing you could look at is rolled throughput yield.
    Just how I have used it in the past.

    0
    #122951

    jediblackbelt
    Participant

    Rachel –
    On a more micro (cell) level, I have looked at lead time as the time it takes to get a piece from start to finish.  So I would tag a piece at the beginning of the process and then time it until that piece ends up in the ship container as a completed part.
    Throughput I have always looked at as how many pieces per time period you are making.  So you may be running parts at 60 pieces per hour (1 per minute) with a lead time of 10 minutes.  Then you can go back to Little’s Law and show you have 10 pieces of WIP in the cell: WIP = throughput * lead time, with both in the same time units.
    Just how I have used it.  Not saying it is 100% correct, but that is how I have always interpreted the Law, which gets you to the point of showing how one-piece flow is the most efficient method to reduce lead time.
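    A one-liner to check the units, with the illustrative numbers from above:

        throughput = 60 / 60               # pieces per minute (60 per hour)
        lead_time_min = 10                 # minutes from tag-in to ship container
        wip = throughput * lead_time_min   # Little's Law: 10 pieces in the cell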
     

    0
    #122950

    jediblackbelt
    Participant

    Darth –
    What I am really thinking about doing is the IMR chart, and then rolling a multi-vari chart together with it to come up with daily/shift differences in setups.  What are your thoughts on that method?  Also, when you mention a weighted average chart, how would I roll that out?  Could I still use a variable grouping for my dispersion, or would I be breaking some rules?  I haven’t had the luxury of dealing much with weighted average charts; I would probably need to do more research on their applications.
    Also, I want to thank you for your help to me and this forum.  Your opinion, along with several others (Carnell, Stan, just to name a couple) that answer a majority of the real issues, is a great help.

    0
    #116146

    jediblackbelt
    Participant

    I would suggest the AIAG book to look those things up.  Or, of course, the numerous software packages; when all else fails, do a Google search for the topic.

    0
    #114528

    jediblackbelt
    Participant

    Take the total time to create your part.  Say you have a part that takes 4 operations in a cell to build, and you have a takt time of 1 minute.
    If the cycle times are A=1.0, B=2.0, C=0.5, and D=0.5 minutes, then your total labor content is 4.0 minutes.  Take 4.0 / 1.0 and you get a total of 4 operators.
    Make sure, though, that if you build allowances for breaks/lunches into your takt time, your cycle times are free of any allowances, so you don’t count them twice.
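    The same calculation as a sketch, rounding up since you can’t staff a fraction of an operator:

        import math

        takt_min = 1.0                        # allowances already in takt
        cycle_times = [1.0, 2.0, 0.5, 0.5]    # ops A-D, allowance-free
        work_content = sum(cycle_times)       # 4.0 minutes of labor per piece
        operators = math.ceil(work_content / takt_min)   # 4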
     

    0
    #111326

    jediblackbelt
    Participant

    Chanda –
    The way I have always done utilization is a little crossed from the other responses.  It looks like one gave you efficiency, another gave you utilization, and the third gave you what looks to be rolled throughput yield.
    The way I have always seen and done utilization is work hours / clocked hours on the job.  So if you worked 6 hours on the job and clocked in for 8, the utilization was 6/8 = 75%.
    If you want to find productivity for the operation, you multiply utilization and efficiency.  You calculate efficiency by taking your standard hours for a job multiplied by pieces produced, divided by the man-hours used.  So if your standard is 1 minute and you made 480 pieces, you have 480 standard minutes.  If you used 2 people for 8 hours, you have 2*8*60 = 960 man-minutes.  So you take 480/960 = 50% efficiency.
    For productivity you take the efficiency and multiply by utilization.  So 50% * 75% = 37.5%.
    Hope this helps. 

    0
    #109461

    jediblackbelt
    Participant

    1.  Never heard of it.
    2.  Follow the AIAG guidelines and/or customer specifics for deciding how to rate your PFMEA.  Nobody “decides” your RPN; it is a calculated number based on Severity, Occurrence, and Detection.  Your Severity is based on (obviously) how severe the effect is if it happens.  In the automotive world you aren’t allowed much freedom to decide your ratings like you would using the tool in other ways – they pretty much decide it for you.  A lot of times you will need to link your PFMEA back to the DFMEA for a product.  If your DFMEA has a severity of 10 for something and your process can produce that failure, then your PFMEA gets a 10.  All your documents need to be linked, from your DFMEA to your PFMEA to your flow chart to your control plan.
    Most of the time your customer will review your PFMEA during a launch review, or especially on the first defect shipped into their plant.  The first question any good IQ manager at an automotive plant will ask about a problem you shipped them is “Have you reviewed your PFMEA and control plan?”  Be prepared… and may the force be with you….

    0
    #106772

    jediblackbelt
    Participant

    I have pretty much used the PFMEA for the PA side of the coin: what are the potential causes, and then go after them.  We also do a lot of PA work with kaizen suggestions for improvements.  Don’t get too wrapped up in the “well, if they have already seen it, then is it CA?” debate.  The bottom line is to solve the problems at hand, and as long as you are consistent in your techniques you can always argue for one side or the other.
    DMAIC is always a good process, but again, don’t get hung up on whether defining a problem makes it CA instead of PA.  The basic premise of all the tools out there is to find a problem, fix it, and then repeat the process.

    0
    #102846

    jediblackbelt
    Participant

    This is where I would calculate two ratios.  One is what I posted earlier about efficiency, and then you can calculate utilization.  Multiply the two to get productivity.  Now you are finding out whether you are spending money making parts, and how close to standard you are when making them.  You don’t pay for rework or scrap, and you base it against a standard metric.  By multiplying the two you are now seeing how productive you are for the company.
    Efficiency = earned hours / time making parts
    Earned hours = standard time/piece * good pieces
    Utilization = time making parts / time paid
    Productivity = Efficiency * Utilization
    Any problems with this calculation?  I believe this was the IE standard when I was taught, and it has been around for years.  Not saying it is right (mass versus lean production), but I would love some more discussion on this subject.

    0
    #102757

    jediblackbelt
    Participant

    Alright, let’s say the next shift comes in and produces 200 parts.  Now for two shifts I have 360 parts at 2 minutes per part, which totals 720 earned minutes, and I have used 960 minutes of time.  So I have 720/960, or 75% efficiency.
    What am I missing?  Always up for some teaching, especially if there are improvements to make.

    0
    #102738

    jediblackbelt
    Participant

    I calculate efficiency based on earned hours – which is the usual way to calculate it in the IE costing world.  A part has an amount of time that it requires to make.  This is the standard time, and it includes PFD allowances (Personal/Fatigue/Delay).  Every good part you make earns you that amount of time.  You then take those hours and divide by the time you used.  This is your efficiency.
    Example: a widget takes 2 minutes to make from start to finish (with all PFD added).  You make 160 good pieces in an 8-hour shift.
    160 * 2 = 320 earned minutes; an 8-hour shift is 480 minutes.  So…
    320/480 = 67% efficiency.
    It is possible to go over 100% efficiency, but that leads to questions: is the standard accurate, did they run through breaks and lunches, etc.?

    0
    #102220

    jediblackbelt
    Participant

    Zack –
    Company must have the firewall tightened up a little.  Can you send it here?
    [email protected]
    Thanks,

    0
    #102037

    jediblackbelt
    Participant

    Zack –
    Here is my work address.
    Thanks,
    [email protected]

    0
    #102031

    jediblackbelt
    Participant

    Zack –
    Sounds like you are really using OEE to its fullest.  I am very interested in the chart you use to present it.  I have always struggled to find a method that uses it and still gets me information to improve my operations.  Is there any way I could get an example of your method of presenting it?
    You are right, the way I came off making it sound like just another metric wasn’t very good for somebody that may or may not be familiar with it.  I have always liked and used OEE, especially for capacity studies, but have always struggled using it as a single measure for improvements.
    Keep fighting the fight and maybe someday we will meet in the trenches.

    0
    #102009

    jediblackbelt
    Participant

    Zack –
    I don’t want to argue whether OEE is or is not a critical factor.  I use OEE all the time; the problem is it tells me I need to improve, not where.  If I tell you that your OEE is 75%, where do you focus improvement efforts?  It tells me 25% of my equipment effectiveness is available for improvement, but it doesn’t tell me where to focus.
    All I am saying is that the three individual measurements, used in a scorecard or dashboard tracking system where I can monitor them individually for control and improvement, are more value-added.
    It is the same as RPN.  An RPN of 100 shows I need improvement, but are all RPNs of 100 created equal?  Should I focus my engineering team on poka-yoking a process because I get an RPN of 100?  What if process A is O=10, S=10, D=1 and process B is O=1, S=10, D=10?  If I kick off a team for process A instead of process B, I am making a mistake: process A is detected nearly 100% of the time and process B is never detected, so I should send my poka-yoke team after process B.
    The same goes for OEE: you have to look at the underlying layers of the measurement to get true meaning from the metric.  The metric does not stand alone for anything other than reporting.  It is good for upper management, but not for those in the trenches fighting the battle.  Give me the three individual metrics any day versus the pooled metric.

    0
    #101984

    jediblackbelt
    Participant

    Billybob-
    The OEE measurement is a decent measure of how your process is sitting and whether you have any opportunities for improvement in the process.  The only problem I have with it is that OEE by itself doesn’t tell you anything about where to improve, only that you have improvements to be made – kind of like RPN.
    I use OEE, but don’t report it unless asked.  I would rather keep looking at the three deliverables that decide OEE – uptime, FTQ, and efficiency.  This gives you the areas to look at.
    OEE is good, but you always end up asking the “where can I improve” question anyway.  So measure the three deliverables and know that you can calculate OEE at any moment.
    Plus, OEE can be fooled by your efficiency.  If you have a loose engineering standard, you can have an inflated efficiency and skew the OEE upward.  In that case, if you only looked at OEE and it showed >85%, you would say you were world class.  The problem is you could be sitting at 80% for both uptime and FTQ but have a very loose standard and an efficiency of 133%.  So it could lead you to not focus on the problems.
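    A quick sketch of that loose-standard effect, treating OEE as the product of the three deliverables (numbers made up):

        uptime, ftq = 0.80, 0.80
        oee_honest = uptime * ftq * 1.00    # 0.64 with an accurate standard
        oee_inflated = uptime * ftq * 1.33  # ~0.85, "world class" on paper only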
    Good luck with the replies…

    0
    #101284

    jediblackbelt
    Participant

    I typically use the F-test to determine whether I am eligible for ANOVA.  If the F-test shows the variances are equal, then I use ANOVA for my results.  If I reject the null on the F-test, then I use a t-test without assuming equal variances (Welch’s).
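    Roughly, in Python with scipy (the data is made up; a two-sided F-test on the variance ratio, then the matching comparison of means):

        import numpy as np
        from scipy import stats

        a = np.array([9.8, 10.1, 10.0, 9.9, 10.2])
        b = np.array([10.4, 10.6, 10.3, 10.5, 10.9])

        f = np.var(a, ddof=1) / np.var(b, ddof=1)
        dfa, dfb = len(a) - 1, len(b) - 1
        p_var = 2 * min(stats.f.cdf(f, dfa, dfb), stats.f.sf(f, dfa, dfb))  # two-sided F-test

        if p_var > 0.05:
            stat, p = stats.f_oneway(a, b)                    # variances equal: ANOVA
        else:
            stat, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test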

    0
    #97132

    jediblackbelt
    Participant

    Here we go again.  I personally am proud to say I am certified as an ASQ BB.  However, the floodgates will open as to what that really means; then again, like certain openings in the body, everyone has an opinion.  The bottom line is: can you deliver results?  What do you want to prove?  True, anybody can study and pass the test, although I did think it encompassed a lot of good material.  But does that make it any more or less of a good leveling system?  Every “title” has a test associated with it and does carry some weight.  Everyone knocks it, but without some type of governing body, how can you have a baseline?  I have been around GE belts that couldn’t decipher a trend chart.  Then again, I have been around GE belts that made me sit back in awe of their knowledge.  The same goes for everything, not just Six Sigma: you will always have good and bad in every career field.  As with everything, you get out of it what you put into it, and the end result for a Black Belt is: can you deliver results?  If you can without a certification, more power to you; if you have the certification, it should be a good feeling of accomplishment.

    0
    #96442

    jediblackbelt
    Participant

    The question should be: is rework a part of your normal process?  If for some off-the-wall reason it is, then count your defects.  Rework is an NVA operation and should be avoided.  Adding rework defects to the normal process defects will skew the data and does not make sense with regard to lean.  Eliminating rework is the goal, and reducing the number of parts going to rework is the way to do it.  Measuring the efficiency of rework seems like an oxymoron.  Would your customer want to know how efficient your rework operation is, or would they rather know how many parts come into rework and what the plans are to stop this NVA operation?  If parts coming from rework are failing, then your DPMO will rise accordingly anyway.  You should want to know whether any parts reintroduced into the line are coming from rework, because if they are, you obviously have problems with rework to begin with.

    0
    #95387

    jediblackbelt
    Participant

    I guess that is what is confusing me.  I understand R-sq for my regression and correlation models, but ANOVA, where I am interested in the difference in means, is where I am confused.  Is it telling me basically the same thing as the p-value and F statistic – that there is a difference in means and how certain I am of it?  Having an R-sq value for ANOVA just feels like something new, and I don’t want to miss out on why it is calculated for you and what it shows beyond the p-value or F value.

    0
    #93964

    jediblackbelt
    Participant

    Fort Wayne, Indiana uses Six Sigma in its city government.  In fact, it is written about in Michael George’s book Lean Six Sigma for Service.
    I didn’t read all of the other posts so if this is already mentioned then I apologize.

    0
    #93661

    jediblackbelt
    Participant

    When did they change to 6sigma?  I had not heard this was to be reported this way.
     
     

    0
    #90043

    jediblackbelt
    Participant

    Basically it is policy management, or policy deployment.  I have seen it used to disseminate the firm’s strategic plans and goals and then integrate, or link, them to how we are going to measure and achieve them.
    Bottom line:  strategic planning.

    0
    #90041

    jediblackbelt
    Participant

    Very good analogy.  So if that is the case, then when I have a 10% gage R&R, that means there is a chance of making a measurement-system error of +/-10% around my gage reading.  So if I take a reading from the gage, it could be off by the percent of my gage R&R.
    On that same note, if I use the P/T ratio, that should be the range over which parts could be good/bad around that same area.
    Correct thought???

    0
    #90015

    jediblackbelt
    Participant

    I appreciate everyone’s responses; they have helped me understand a lot of different methods for adjusting the spec limits to satisfy customer curiosity about how we protect them.  However, I am still confused about what the actual gage R&R % tells us.  Is it as simple as “X% of our variation is from the gage”?  Or is there something else you can relate it to?
    Thanks especially to Mike for helping understand the P/T ratio better.
    Thanks,
     

    0
    #89881

    jediblackbelt
    Participant

    This is more of a question than a statement, but I was always taught that if you use a p-value in ANOVA then you are assuming normality in the data.  Correct???

    0
    #87551

    jediblackbelt
    Participant

    What is your process?  Offhand, without knowing anything about it, I would start by looking at an XmR chart – just a simple individuals chart.  I would say, though, that it depends on your process, how it reacts, and how you want it to react; but as a simple chart, in the majority of processes I have dealt with, you can get by with an individuals chart pretty successfully.
    Good luck with the buffet and your BB.  Sounds like a real peach of a coach.

    0
    #86719

    jediblackbelt
    Participant

    Have you looked at a balanced ANOVA?  I have done this in the past as a multistep-style experiment.  Some of the other replies are better, though.  I especially like the one where you run a design with a curvature check.
     

    0
    #86601

    jediblackbelt
    Participant

    Tom –
    I may be completely in left field, but I read this as you wanting to see whether there is a difference in appearance if you do an operation or don’t do it.  If you are interested in another method that is quick, have you tried a Tukey tail-count test?  It is an extremely quick test that I use for “is one method better than the other?”  Have the operator run 10 parts of each method and then rank all of them from best to worst; the person ranking should not know which process is which.  Then count in from each end until you change from one process to the other.  If you end up with a total end count of 7 or more, you can say that the “better” process outranks the other with 95% confidence.  Check out Keki Bhote’s book “World Class Quality” for details.
    Example:
    Ranked from best to worst:
    11112212211121212222
    Your tail count is 8: four 1’s at the best end and four 2’s at the worst end.  You can now say with 95% confidence that process 1 is better than process 2.
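    A little sketch of the end counting on that example string (each character is a part’s process label in ranked order):

        ranks = "11112212211121212222"       # best to worst
        best, worst = ranks[0], ranks[-1]
        lead = len(ranks) - len(ranks.lstrip(best))     # run of one process at the best end: 4
        trail = len(ranks) - len(ranks.rstrip(worst))   # run of the other at the worst end: 4
        tail_count = lead + trail if best != worst else 0
        print(tail_count)  # 8 (>= 7, so ~95% confidence per the test)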
     

    0
    #83845

    jediblackbelt
    Participant

    Can somebody say “disgruntled”?  You can say that every organization and every profession has its own set of “turds”; that should come as no big shock to anybody.  As far as articles, keep in mind most are written by academics or consultants.  There are a lot of great articles out there that are probably over the heads of the majority of shop-floor Black Belts.  Does that mean they have no merit?  I think not.  Still, those of us that run the floor and save money should have an idea of, and a link to, the academic world.
    As far as the accreditation from ASQ: have you taken and passed the CSSBB exam?  If not, then how can you say it does not warrant any merit?  Is this not just an opinion?  And I think we all know about opinions and that everyone has one.  I would put my projects and savings up against anyone out there.  Not bragging, but the tools and how you apply them are what is important, not being certified.  So do I think I need a certification?  No.  But do the employers of future Black Belts and managers?  Most of them say yes.  With the market for BBs getting saturated with so many low-key and quickie programs, why not have a certification board that at least helps show you have some knowledge and that somebody else can vouch that you applied the tools?

    0
    #83562

    jediblackbelt
    Participant

    Yep, it was a bruiser.  Not extremely hard questions, but they ranged over the entire body of knowledge quite extensively.  There were more calculations than I expected.  I still walked away feeling drained, but good about the expected results.  I think they did a good job of not making it so easy that everyone will pass, but if you are a decent enough BB there should be no issue.  Big hint: I thought the QCI primer and Breyfogle’s book (Implementing Six Sigma) were big helps.  Breyfogle has a new revision coming out soon, and I am excited to see what he has added.  Pyzdek’s book is a good one as well.  I took those and my training manual into the exam and used them all.

    0
    #83561

    jediblackbelt
    Participant

    I would avoid using Cpk as a metric indicator for OEE.  What would the specifications be?  I would just trend the OEE and show improvements based on that trend analysis.  For sample size, estimate how much of a shift you want to be able to detect.  For example, if you want to detect a 1-sigma shift, then set both the difference and sigma to 1.  Power, of course, is 1 minus the beta error you are willing to accept.
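    For what it’s worth, the same power calculation can be sketched outside Minitab.  Here is a rough statsmodels version of a two-sample t-test power solve; the 0.8 power and 0.05 alpha are just my example settings:

        from statsmodels.stats.power import TTestIndPower

        # detect a 1-sigma shift: effect size = difference / sigma = 1.0
        n_per_group = TTestIndPower().solve_power(effect_size=1.0, alpha=0.05, power=0.8)
        print(n_per_group)  # about 17 per group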
    I’m not a big fan of OEE as a stand-alone, though.  It’s a good measurable, but if I tell you your OEE is 70%, what do you go work on?  It tells you there is 30% of improvement to be had, but it doesn’t point you in the direction to improve.  You still need to look at the three underlying metrics for it to have value.

    0
    #83516

    jediblackbelt
    Participant

    Take your data sets and check them for normality and then for equal variances.  If they pass, use ANOVA.  Otherwise you could just do a simple one-tailed t-test, or even ANOM; there are a lot of different tests you can use.  The bigger question is: why did the system get abandoned?  You shouldn’t even be checking for a change for the worse; you should be asking those in charge of the process why they allowed it to drift back.  Look at the controls that were put on the better process, see what needs improving to make the input controls better, and then monitor the output.  Just an opinion, but I would ask why the process changed rather than prove statistically whether the changes are valid.

    0