iSixSigma

MBBinWI

Forum Replies Created


Viewing 100 posts - 1 through 100 (of 1,745 total)
  • #251150

    MBBinWI
    Participant

@rbutler – a function of the computer generation. If they had grown up with a slide rule, they would understand. ;-)

    0
    #251149

    MBBinWI
    Participant

    @rbutler – well said!

    0
    #251046

    MBBinWI
    Participant

@Marc68 – I’m going to take exception to @cseider and @poetengineer in their guidance on this issue. When determining calibration frequency there are two items to evaluate – bias and precision. While an MSA can help, it also introduces issues related to the human element, and that is not what calibration is about. Calibration is about the instrument itself (regardless of human interaction) being able to report a valid value at the level of precision required. To evaluate this, you should periodically measure a known-value specimen with your measurement device. Does it continue to report an accurate value to the level of precision required?

Typically, the need for calibration is a function of mechanical processes – wear and physical changes in the measurement mechanism. In systems where there is little of this phenomenon, the calibration period should be set accordingly. This is more a function of physics and mathematics than of risk (FMEA). Setting the acceptable risk level can help establish the values to be used, but not the calibration frequency. That is a function of physics/math.
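As a minimal sketch of that periodic check – all numbers are hypothetical, and the pass/fail thresholds would come from your own precision requirement:

```python
import statistics

# Hypothetical repeated readings of a reference standard with a
# certified (known) value; all numbers are illustrative.
known_value = 25.400
readings = [25.401, 25.398, 25.402, 25.399, 25.400, 25.403]

bias = statistics.mean(readings) - known_value   # accuracy of the instrument
precision = statistics.stdev(readings)           # repeatability of the instrument

print(f"bias = {bias:+.4f}")
print(f"precision (std dev) = {precision:.4f}")
# Track both over time; when either drifts beyond what your required
# resolution allows, the instrument is due for recalibration.
```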

    Just my humble opinion.

    0
    #251045

    MBBinWI
    Participant

@ahamumtaz – You should go back and research the definitions of Cp, Cpk and Sigma that were provided to you. You should find some verbiage that states something like: Cp is “potential” capability, whereas Cpk is “actual” or “observed” capability. The difference is the location of the distribution in relation to the target and the upper/lower boundaries of acceptance. You cannot have a negative Cp, since that definition does not include the location issue (it uses absolute values). If you use Cp as your measure for Sigma, then you could have the scenario you describe – but this would be wrong.
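A minimal sketch of the standard formulas, with made-up data where the mean sits outside the spec so Cpk goes negative while Cp cannot:

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Cp ignores where the process is centered; Cpk penalizes it."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)               # always positive
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # negative if the mean is outside the specs
    return cp, cpk

# Illustrative data: the mean (10.4) lies above the USL (10.0).
data = [10.2, 10.4, 10.3, 10.5, 10.4, 10.6]
cp, cpk = cp_cpk(data, lsl=9.0, usl=10.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cp > 0, Cpk < 0
```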

    0
    #251044

    MBBinWI
    Participant

@derekkoz – as the learned @rbutler identifies, once you limit the number of decimal places you are going to use, you have created a discrete measure. The key is whether the number of decimals is sufficient to provide the resolution needed to answer the question being investigated. Now, don’t get me wrong, there are certainly absolute discrete measures, but even continuous measures are functionally discrete once you limit the number of decimals. The real question is what level of precision you need in order to answer the issues you are looking to understand.

I don’t know if there is any proof of this, but generally you should have one decimal place more than the number of significant places you are looking to analyze. So, if you are trying to answer a question where the precision is to the 5th decimal place, and you are able to accurately and precisely measure to the 6th decimal, then you should be fine. If not, then you are going to have issues.
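A quick illustration of the point (numbers invented): round a continuous measurement too coarsely and distinct values collapse into a few discrete bins.

```python
import random

# Simulated "true" values varying in the 5th decimal place.
random.seed(1)
true_values = [random.gauss(5.0, 0.00002) for _ in range(10)]

coarse = {round(v, 4) for v in true_values}  # 4 decimals: resolution lost
fine = {round(v, 6) for v in true_values}    # 6 decimals: differences survive

print(f"distinct values at 4 decimals: {len(coarse)}")
print(f"distinct values at 6 decimals: {len(fine)}")
```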

This escalation of precision is something I have seen in industry. At the outset of an improvement effort, a rather gross measurement scale is sufficient because the issues are rather large. As these are reduced, the measurements need to become more precise in order to discern the differences and make further improvements.

    Good Luck.

    0
    #251043

    MBBinWI
    Participant

@geozor – A good approach would be to do a root cause analysis, identifying the several contributors to the issue (a fishbone diagram would be a good tool here). Categorize the contributors as to their likely impact. Then select the greatest contributor and work on improving it so that it is either no longer a contributor, or its contribution is lessened. Then attack the next greatest contributor. And so on.

    0
    #249648

    MBBinWI
    Participant

@lucaspaesp – a lot of good discussion here, and I hope you’ve learned some things. Fundamentally, you have a measurement system that doesn’t have sufficient precision. There are several techniques that I might suggest, but the easiest is to use the raw data instead of sub-groups (as it seems you plotted in your graph #2).

    0
    #249647

    MBBinWI
    Participant

    @Mike-Carnell – and don’t get me started on the whole “1.5 sigma shift” bullshizzle.

    0
    #249646

    MBBinWI
    Participant

@cseider – so, you’re telling me it ain’t? Oh, I get it, +3 and -3 must be Zero sigma ;-}

    0
    #249645

    MBBinWI
    Participant

    @rbutler – Big Brother is always watching ;-}

    0
    #249644

    MBBinWI
    Participant

    @rbutler – and simple algebra at that ;-)

    0
    #249643

    MBBinWI
    Participant

@irfan.ahanger01 – I’m certainly not going to disagree with @rbutler (good to see you’re still here, trying to educate the masses!), but I would take a more pragmatic approach. You already have a design with the center points, so you need to add the star points. For each factor, you will need two – a high and a low, set at the axial distance alpha – to augment the data you already have. Each of the other factors should be set at its center point when paired with the star-point factor. Five factors, times 2 new levels each, is an additional 10 runs.

If you set up a Central Composite Design in Minitab, it also increases the number of true center points to six, for a total of 32 runs.
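A minimal sketch of generating those 10 star runs in coded units – the rotatable alpha for a half-fraction cube is shown as an assumption; Minitab’s default choice may differ:

```python
# Star (axial) points for a 5-factor CCD in coded units: each factor in
# turn takes +/-alpha while all other factors sit at their center point.
n_factors = 5
n_cube_runs = 2 ** (n_factors - 1)      # 16 runs in the 2^(5-1) half fraction
alpha = n_cube_runs ** 0.25             # rotatable alpha = 2.0 (an assumption)

star_points = []
for i in range(n_factors):
    for sign in (+1, -1):
        run = [0.0] * n_factors
        run[i] = sign * alpha
        star_points.append(run)

print(len(star_points))  # 10 additional runs, as described above
for run in star_points:
    print(run)
```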

I’m just a neanderthal compared to @rbutler, but that’s how I would do it. Just my humble opinion.

    0
    #249642

    MBBinWI
    Participant

@AR19CHU91 – you might get more of a response if you posit some of your own perspectives instead of just throwing something out and asking for comments. As I have said here many times (albeit not recently) – do your own homework.

    0
    #249641

    MBBinWI
    Participant

@Karthikdharmalingam – What is being identified is a measurement system error. In this situation, each internal audit sample that is taken and subsequently subjected to the ATA can be used to identify a confidence interval on the data provided by the internal audits.

    1
    #249640

    MBBinWI
    Participant

@AlonzoMosley – A couple of things come to mind. 1) If resources are constrained (and when aren’t they?), then the organization needs to prioritize the allocation of those scarce resources. If you have several processes that all could benefit from improvement, often your best allocation is towards the one most constrained – so long as you can actually sell the improvement (but a fundamental premise of being constrained is that you CAN sell the increased capacity, else it really isn’t constrained). 2) Another question that needs to be investigated is whether the scarce resources would be better applied to making other, non-constrained but lower-return processes more efficient and would therefore provide a better return.

Too many Lean/Six Sigma practitioners fail to examine these two points. They see defects or poor processes and immediately look to apply critical skills to problems that won’t improve output or really improve the bottom line. If you look at my two points above, item 1 deals with increasing the top line, and item 2 deals with increasing the bottom line. One or the other should be the primary focus. I’ve seen too many instances where improvement was made that had negligible effect on either, and the organization ended up getting a negative view of LSS.

If you haven’t read it, I would recommend “The Goal” by Eli Goldratt to get a better understanding of how constraints should be analyzed. There is a new(er) perspective of first using constraints evaluation to identify WHAT needs to be addressed, and then LSS or other improvement methods are properly applied to make the improvement needed.

At least, in my humble opinion, that’s how I would react to the situation you posited.

    0
    #245863

    MBBinWI
    Participant

@Andy-Parr – Heck, I’d PAY to have that certification myself ;-}

    0
    #245813

    MBBinWI
    Participant

@Andy-Parr, @Mike-Carnell – good to see you both are still here. Funny how some people latch on to something and their perspective totally vanishes. Oh, well. I tried. Must be getting soft in my old age – I didn’t even look to take out the flame-thrower. Best to you both.

    1
    #245748

    MBBinWI
    Participant

    @stephanieareid – you really are a one-trick-pony, aren’t you?

    Each methodology has its utility and none are the be-all and end-all.

For others reading this (old) thread, here are some nuggets of learning I have managed to dig up over the years. Different problems require different approaches. Lean, which our friend Stephanie seems so very fond of, is good for focusing on elements of a system that are inefficient or causing waste. Six Sigma (the DMAIC kind), which Stephanie seems to like to disparage, is very good at attacking one form of waste – defects or mistakes. One of the failings of Lean and Six Sigma is that they don’t inherently point you to the best place to apply limited resources.

Now, if you had unlimited resources, you would have no problem. Just use the tools and people liberally to attack everything. Since that isn’t the reality for most organizations, you need to prioritize what to deal with first. This is where TOC comes in very handy. You see, not all process improvements actually provide the benefits the organization needs to realize. Yes, you can SMED the heck out of the operation, but if you didn’t SMED the process that is your bottleneck, you are unlikely to realize much in the way of actual savings (and will have pissed many people off doing a bunch of work that doesn’t seem to make any important impact).

    And, @cseider, I would propose to you that Drum-Buffer-Rope is a process improvement method – it is the one that aligns the cadence of the operation and helps to identify when problems arise that need attention.

    But that’s just my humble opinion.

    1
    #245747

    MBBinWI
    Participant

@Tipman – I’m going to disagree with @straydog that creative people are likely to resist. My experience is that they need to be shown how various tools help them to do what they inherently know they need to do.

I’m not sure just what you are having trouble with. Is it identifying new items to pursue (you will need tools that foster innovation)? Is it identifying acceptance limits for items that you have created that will satisfy customers (QFD, benchmarking, and customer trials are good techniques)? Or is it making an item insensitive to manufacturing, customer use, abuse, and environmental conditions (robustness will be the approach you want to apply)?

These are areas which are covered by DfSS. I would not try this on your own unless you are willing to do a lot of study and practice many iterations. The toolkit which is DfSS is extensive, and many of the tools and methods take several cycles of learning to become competent at applying.

    There are several good books on DfSS – but my favorite is by Skip (Clyde) Creveling.

If you are really looking to pursue this, find a good guide (consultant) who can lead you along the journey – and it will be a journey of several years, so don’t expect to flip a switch and voila, things are perfect.

    Good luck.

    1
    #245746

    MBBinWI
    Participant

    @rbutler / @cseider – good to see the two of you are still here.ย  Hope you are doing well.

    0
    #245745

    MBBinWI
    Participant

@stephanieareid – you misinterpret my comment. I don’t doubt that they work together – I was merely inquiring how the original poster thought they work together. Too many posters here look for others to do their homework (due diligence they could figure out themselves if they just did some basic work).

    My years of experience show me that they do work together.ย  One of the wastes that Lean seeks to reduce is defects – which are often caused by being out of acceptable tolerance.ย  That is where six sigma can be applied.

    You sound like many zealots who latch on to their favorite approach/methodology/tool which becomes their hammer and every problem is a nail (the same can be said for many six sigma practitioners, by the way).ย  I would encourage you to critically evaluate the problem that you are looking to address and apply the correct tool.ย  By the way, Lean and Six Sigma are merely a couple of them.ย  You should keep learning new ways to solve problems – or better yet, create a new one where no existing ones seem to be adequate.

Good luck. And open up your mind to other approaches. It will make you a better problem solver.

    0
    #240851

    MBBinWI
    Participant

@andy-parr, @mike-carnell: Sorry, it’s been a while. Very busy dealing with my own brood. Took on an NPD Director role a couple of years back with a group of younger engineers who have little background/understanding of DfSS. So, I’ve been busy teaching them, and they have been teaching me about commercial ovens and cooking equipment.

    Best to you both, and to my friends at iSS.

    1
    #210010

    MBBinWI
    Participant

    @michaelcyger – not sure exactly what the left side icons are supposed to be doing for me. I have numbers next to the bell, the envelope, and the group of people. When I click on the envelope and group of people, it doesn’t take me anywhere. btw – using IE11 on Win 7 Pro.

    0
    #210009

    MBBinWI
    Participant

@michaelcyger: Congrats. Looks nice. I agree performance is much better. Also agree with @rbutler that the missing forum listing is a real drag.

@cseider: If you just type the at sign and the first few letters of the handle you are looking to tag, a dialog pops up to select from (it even has their avatar/pic if they have one – very helpful).

    0
    #210008

    MBBinWI
    Participant

    @felixveroya: After 3 years, how have things been going? You didn’t mention what type of engineers you were looking to motivate. I assume manufacturing engineers. I have found that reward programs usually bring about cycles of fixing the same problem (it is easy to keep “fixing” the same problem instead of rooting out the cause to begin with). Manufacturing engineers typically want/need to solve issues for good as a fundamental part of their job. Recognize them for their achievements in your performance management program not through a financial incentive program. Just my humble opinion.

    0
    #210007

    MBBinWI
    Participant

    @Mike Carnell: Good advice.

@Jessica: As Mike cautions, be wary of someone who is a religious devotee of a specific methodology. I like to see if the person is flexible enough to seek out new/different tools to help them solve their issues. For example, many BBs slavishly look to the lowest Cpk value or highest DPMO levels. Having studied Theory of Constraints (TOC), I learned long ago that there is one item in the system that is currently holding it back from better performance. Find that singular issue and improve it (using Lean, SS, SMED, whatever it calls for to bring about improvement). Once you have improved that, look for the next constraint and move to that. A BB who talks like that is one you should not let get away. Good luck.

    1
    #210006

    MBBinWI
    Participant

    Good points all. I agree that RTY is probably the best way to evaluate your overall quality. You apply DMAIC continuous improvement methods to improve process capabilities until the costs of better improvement outweigh the benefits. Then (or before, depending on the focus of the organization) you should apply DfSS to change the underlying design of the system to one less sensitive to the variations that exist.
To go back to the original question – it doesn’t really matter what target level you choose; what matters is that you are measuring it. If you are measuring honestly, you can then have discussions with your customer and management on whether you are currently good enough (you aren’t). But more importantly, by measuring, you will see where you can improve and begin to take steps to get better in that area. You should never be satisfied with a level – there will always be defects that can be eliminated. You should be striving for a method to identify and help prioritize where to go after improvement, as that will bring you the biggest bang for your investment. Just my humble opinion.
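A minimal sketch of the RTY arithmetic, with made-up step yields – RTY is simply the product of the first-pass yields of every step:

```python
from math import prod

# Illustrative first-pass yields for a four-step process.
step_yields = [0.98, 0.95, 0.99, 0.97]

rty = prod(step_yields)
print(f"RTY = {rty:.3f}")  # ~0.894: only ~89% of units pass every step first time
```

Note how four individually decent steps compound into a much lower overall yield – which is why RTY is a more honest overall quality measure than any single step’s yield.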

    0
    #210005

    MBBinWI
    Participant

    All good points. For something like this, I would choose a spider chart. Each of your main categories becomes a spoke from a central point. You plot the score from each along this spoke and then connect the lines. This helps to identify strength areas and weak areas. You can do the same for the details within each category. This way, you get the overall perspective, and have more detailed info to dive into for improvement.
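Here is a minimal matplotlib sketch of such a spider (radar) chart – the categories and scores are made up for illustration:

```python
import math
import matplotlib.pyplot as plt

# Hypothetical category scores (e.g., on a 1-5 audit scale).
categories = ["Quality", "Delivery", "Cost", "Safety", "Morale"]
scores = [4.2, 3.1, 3.8, 4.6, 2.9]

# One spoke per category; repeat the first point to close the polygon.
angles = [2 * math.pi * i / len(categories) for i in range(len(categories))]
angles += angles[:1]
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
plt.show()
```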

    0
    #202082

    MBBinWI
    Participant

    @andycroniser – why don’t you provide your explanation and we can give you feedback?

    0
    #202053

    MBBinWI
    Participant

    @cseider – been quite busy, so haven’t had much chance to check out the board. Hope things are well with you.

    0
    #201717

    MBBinWI
    Participant

    @cseider – please check your private messages. Any interest?

    0
    #201573

    MBBinWI
    Participant

    @rubennicolas – depends on what you consider a “project.” At this point, you have already identified the issues, so Define isn’t needed. But essentially, yes, you attack each issue in order of potential results to be obtained. The biggest defect creator might not offer the best chance of reducing defects, so evaluate those you have identified for results of improvement and difficulty to improve. Choose the highest results with the lowest difficulty and proceed in order. Good luck.

    0
    #201570

    MBBinWI
    Participant

    @rubennicolas – also, in the last 4 weeks you seem to have an inverse relationship between input and yield. Perhaps at lower inputs the process is better able to convert, so as inputs go up the conversion (yield) goes down. Food for thought.

    0
    #201569

    MBBinWI
    Participant

    @rubennicolas – after only 1 bad week you are considering that your results have stabilized at 93%? I see 3 weeks prior that are above 96%. I don’t think that your process has stabilized yet. You may have had some of the improvements slide back to the old method. I think you really need to establish some controls and ensure that they are effective. Review your actions, ensure that they are still in place, monitor the process results while ensuring these are still in place. You may find that you have actually met your improvement goal but those who are carrying out the actions are reverting to old behaviors. Good luck.

    0
    #201568

    MBBinWI
    Participant

    @dando – not sure exactly what you are trying to evaluate here. Typically, interactions are caused by the inputs, not a result of the outputs.

In trying to understand your query, I took your data and did some sample graphical analysis. I took what you identified as the X value (input) and some of the Ys (outputs) and graphed them. I used two Ys in a scatterplot and used the X value as a group variable. The two resulting graphs are attached. You will see that outputs a and b seem to change based on whether X is high or low. This would indicate that X high causes a different response in outputs a and b than it does when low.

In the second graph, I used a and d. Here the responses are nearly parallel: when X is low the average response is lower, but the slope is similar.

However, your data has significantly more High values than Low, so the apparent pattern might just be the High values dominating the picture.
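For anyone wanting to reproduce this kind of view, a minimal sketch with invented data – two outputs scattered against each other, grouped by the high/low level of the input X:

```python
import matplotlib.pyplot as plt

# Invented data standing in for the attached file.
a = [1.1, 1.3, 1.2, 1.8, 2.0, 1.9]        # output a
b = [2.1, 2.4, 2.2, 3.5, 3.8, 3.6]        # output b
x_level = ["Low", "Low", "Low", "High", "High", "High"]

# One scatter series per group level, so differing responses stand out.
for level in ("Low", "High"):
    xs = [a[i] for i, lev in enumerate(x_level) if lev == level]
    ys = [b[i] for i, lev in enumerate(x_level) if lev == level]
    plt.scatter(xs, ys, label=f"X = {level}")

plt.xlabel("output a")
plt.ylabel("output b")
plt.legend()
plt.show()
```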

    Not sure if this helps or not.

    0
    #201552

    MBBinWI
    Participant

@djnrempel – as @straydog mentions, a control chart is used to evaluate stability. While you might get some use out of a control chart to evaluate changes caused by various programs, it would not be my first choice. I would instead be looking at a hypothesis test such as the two-sample t test or, perhaps more appropriately if you are applying a program change to the entire group, the paired t test.
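A minimal scipy sketch of both tests, with hypothetical before/after scores – paired when the same group is measured twice, two-sample when the groups are independent:

```python
from scipy import stats

# Hypothetical scores for the same people before and after a program change.
before = [72, 68, 75, 70, 74, 69, 71]
after = [75, 70, 79, 72, 77, 73, 74]

# Same group measured twice -> paired t test.
t_p, p_p = stats.ttest_rel(before, after)
print(f"paired t: t = {t_p:.2f}, p = {p_p:.4f}")

# If these were two independent groups, use the two-sample version.
t_i, p_i = stats.ttest_ind(before, after)
print(f"two-sample t: t = {t_i:.2f}, p = {p_i:.4f}")
```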

    Since you are new to the statistics, I would suggest finding a mentor – you might check out a community college that teaches six sigma – to pose these questions where you can have a longer and more meaningful discussion.

    Good luck.

    0
    #201533

    MBBinWI
    Participant

    @rbutler – still very funny. And I stand by my statement that selecting a p-value sets a line in the sand (one that is absolute, not like our previous president).

    0
    #201527

    MBBinWI
    Participant

    @rbutler – very interesting. When I teach (taught, as I hope not to have to teach this again, being back in the practitioner phase) about the p-value, I always emphasize that statistics is shades of grey, but when selecting a p-value, you are selecting between black and white. You must be willing to identify a level of significance that if it is met by a shade over you will accept it, and a shade under you are willing to reject it. You cannot equivocate. Probabilities are not absolute. But p-values must be.

    I find your compilation of those who are unable to discern this reality very funny.

    0
    #201511

    MBBinWI
    Participant

    @mike-carnell – I’ll take your opinion over most people’s “facts.” Have a great Memorial Day!

    0
    #201506

    MBBinWI
    Participant

    @mike-carnell – sometimes people need to hear the truth. And sometimes that truth hurts.

    0
    #201505

    MBBinWI
    Participant

    Billy – I won’t speak for my friend @mike-carnell, but I’ve never taken a certification at face value. Even from the most respected certifiers, there is considerable variation. Those who rely only on the certifying body usually don’t know what they are looking for nor how to evaluate those they are contacting. Thus, they are attempting to use a third party to do that work for them.

I, for one, am self-taught. My “certification” was from a test that I wrote (I was already the MBB in residence), and the evaluation committee were the belts that I had taught. I have done alright without some “name brand” certification. But you need to do what you feel is right for you.

    0
    #201477

    MBBinWI
    Participant

    @tgroves-EXTA – what you seem to be describing is a rolling average reported weekly of data from the past two weeks. This is a reasonable method of damping out noise/variation that occurs at a periodicity larger than the reporting period. It is not an average of an average. Your approach is well within acceptable practice.
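A minimal pandas sketch of the scheme described – each weekly report averages the current and prior week’s raw figures, never previously reported averages (numbers and dates are invented):

```python
import pandas as pd

# Invented weekly raw figures.
weekly = pd.Series(
    [102, 95, 110, 98, 105, 101],
    index=pd.date_range("2016-01-04", periods=6, freq="W-MON"),
)

# Two-week rolling mean over the raw data, reported weekly.
reported = weekly.rolling(window=2).mean()
print(reported)
```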

    0
    #201476

    MBBinWI
    Participant

    @mike-carnell – I profess, I cannot remember the instance, but am sure it was well deserved ;-}

    0
    #201475

    MBBinWI
    Participant

    @cseider @mike-carnell – it is a weekend to remember that our ability to pontificate without repercussion should not be taken for granted. So many had that ability cut short so that the rest of us could retain it. To the veterans out there – Thank You. To those who are the family members of those who gave all, our profound sympathy for your loss that allows the rest of us to remain free.

    0
    #201474

    MBBinWI
    Participant

    @cseider – are you disparaging my current home? Having grown up in Minnesota, the winter here is actually mild ;-}

    0
    #201473

    MBBinWI
    Participant

    @Straydog – good point. I’ve interviewed some that had “certification”, even ASQ, but couldn’t reason through one of my favorite BB questions – what do you do when you have a GR&R of 70% and a process with a Cpk of 1.7? They have been trained to react to certain values, not think and understand what the data is telling them.

    0
    #201457

    MBBinWI
    Participant

@cseider – I don’t usually rant here, saving that for my better half. Living by the adage that brevity is the soul of wit, I try to maintain a minimalist presence. Although certain topics can get me pontificating vociferously.

    0
    #201456

    MBBinWI
    Participant

    @BuckeyeGuy92 – as @straydog identifies, what you are getting are specifications, not needs or requirements. This is the outcome of HOQ 1, not the starting point. You might consider these as the inputs to HOQ 2 and proceed to break these down into the specific design requirements.

    0
    #201455

    MBBinWI
    Participant

    @tgroves-EXTA – so, what you’re dealing with is a system where you want to report weekly, but the system performance is on a longer time period, perhaps 2 wks in periodicity?

    0
    #201442

    MBBinWI
    Participant

    @Mike-Carnell – hi, back. Russia, I’m jealous. One place I’ve wanted to visit but haven’t had the opportunity. Work is good. Lots of challenges, but a team willing to learn.

    0
    #201441

    MBBinWI
    Participant

    @Mike-Carnell – one needs a good rant once in a while.

    0
    #201438

    MBBinWI
    Participant

    @Mike-Carnell – been a busy afternoon. Good to see your comments.

    0
    #201421

    MBBinWI
    Participant

    @TollemG – maybe concentrate more on selling instead of trying to predict sales?

    0
    #201416

    MBBinWI
    Participant

    @lefthooklacey – it is sad that your company was taken in by the worst of our industry. Since you already have some idea of what six sigma is all about, I would recommend that you embark on a course of self-study. I would start by reading Deming and Juran. Then for specific tools I would check out Pyzdek and Bothe. If you are a Minitab user for your statistical analysis, I would get the manual “Lean six sigma and Minitab” by Opex. I would also recommend that you hook up with a mentor. You might find one who is teaching at a community college or through ASQ.

    Good luck.

    0
    #201415

    MBBinWI
    Participant

    @Snandy – why should we do your homework for you? Why don’t you propose an answer and your reasons why it is correct and we will tell you where you are right/wrong.

    0
    #201411

    MBBinWI
    Participant

You can also use the distribution-fitting tool in Crystal Ball.

    0
    #201410

    MBBinWI
    Participant

    @cseider – it’s a background fill pattern. Unfortunately the dots don’t necessarily line up with either of the axes. Perhaps @mparet can take this as input for a future release – would be nice to be able to plot dots instead of gridlines on graphs.

    0
    #201408

    MBBinWI
    Participant

@johnpeters123 – if possible, query your questioner about why they think their answer is correct, and please post back here. It would be interesting to know their rationale. I have a hypothesis that the preparers of these tests are becoming less and less knowledgeable about the subject matter.

    0
    #201407

    MBBinWI
    Participant

    @rbutler – or the preparer of the question isn’t all that knowledgeable.

    0
    #201382

    MBBinWI
    Participant

    @Straydog – Is there really anything “new?”

    0
    #201381

    MBBinWI
    Participant

    @kaac87 – and from where did you receive your “certification?” You might want to ask for your money back.

    0
    #201380

    MBBinWI
    Participant

@johnpeters123 – none of these statements “best describes Lean.” Lean is about eliminating waste. While producing at the rate of customer demand is called takt time, which is a component of a lean production system, it isn’t specifically Lean. Likewise, reducing variation is a fundamental aspect of Six Sigma, but that doesn’t necessarily mean the reduction in variation is actually eliminating waste. Only if the variation was outside acceptable levels would reducing it be eliminating waste. If the amount of variation is acceptable to the customer, the additional costs incurred to reduce that variation could be considered a waste of their own kind.

    This is a very poorly crafted question. Good luck in explaining that to the instructor.

    0
    #201309

    MBBinWI
    Participant

    @DeanOK1969 – as @straydog identifies, Spec Limits have nothing to do with control limits. Your spec is what the customer will find acceptable. Thus, why would your customer not be more satisfied with outcomes above 90%? There should be no upper limit on the spec in this case. The data itself will determine your control limits, you don’t “set” them.

    I think a more salient question should be whether OEE is the proper metric. Is it sufficiently forward looking to be a good metric? Taking measurements on a mechanical/production process is typically good practice as the physics are constant and the system is stable. In the case of an airport, are the components of OEE constant and predictable? I think not. You have weather that can be highly variable and unpredictable, and airlines that have varying policies and standards that are also not standardized and uniform. I think you need different metrics. Just my humble opinion.

    0
    #201308

    MBBinWI
    Participant

    @Straydog – if the physical transformation requiring the time is unacceptable, that identifies a need to search for a different technology or different way to satisfy the requirements. This may exist, or may not, in which case a research project may be needed.

    0
    #201307

    MBBinWI
    Participant

@8sigma – depending on where your data lies relative to the boundary condition (in this case the lower bound of zero), you may or may not have a distribution that is reasonably normal. If it is very close to the boundary (as yours appears to be), you at the very least have a clipped distribution (this occurs when you have an actual normal distribution, but some portion – that under the LSL, for example – is excluded from the data set), or more likely you have a non-normal distribution. Time and physical boundaries can exist where zero is an absolute. If your data always resides sufficiently far from the boundary (at least three standard deviations), then you should not run into a boundary issue with a control chart. If closer, then you can run into problems. And you always want to check for normality and evaluate for special-cause issues.
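A small simulation of the effect (all parameters invented): a normal process whose mean sits close to a hard zero bound produces clipped data that fails a normality test, while the same process far from the bound does not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Process close to the zero boundary: readings below zero never appear.
near = rng.normal(loc=0.5, scale=0.4, size=500)
near = near[near >= 0]                     # clipped at the physical bound

# Same spread, but the mean is >3 standard deviations above zero.
far = rng.normal(loc=3.0, scale=0.4, size=500)

for name, data in (("near boundary", near), ("far from boundary", far)):
    _, p = stats.shapiro(data)
    print(f"{name}: Shapiro-Wilk p = {p:.4f}")
```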

    0
    #201303

    MBBinWI
    Participant

    @cseider – precisely correct, my friend.

    0
    #201280

    MBBinWI
    Participant

    @sambit.ximb – sorry, no attached file.

    0
    #201279

    MBBinWI
    Participant

    @YNot – Ideally what you would want to do is construct samples that have known and distinct defects – in this case reduced signal output at specific levels. You still want to have the operators in the system as they may hook up the boards differently, with different connection seating for example. Your MSA isn’t specifically looking at the correct I/O signal, but whether degradations of the signals are correctly interpreted by the test apparatus.

    0
    #201268

    MBBinWI
    Participant

    Theoretically, yes. However, if they were truly not working and waiting on answers, then this is not only inefficient, but wasteful. At the very least, they should be doing some training or other useful activity during such periods.

    0
    #201265

    MBBinWI
    Participant

Ramesh: You would normally perform an FMEA on a process, not on the entity as a whole.

    0
    #201253

    MBBinWI
    Participant

    @JamesM – I agree with @rbutler that a graphical answer results in a negative number (see attached graph). However, solving the regression equation with Dia of 6.96 gives a result well below the graphical answer. I’m going to ask @mparet to chime in, as this is a Minitab question.

    0
    #201241

    MBBinWI
    Participant

@Pravin25 – you first should understand the cause of the spikes. Are these due to special causes, and thus another process, or are they built into the existing process? Do these spikes, for example, coincide with a time of month, a shift, or a weather phenomenon? Or is it a situation where some number builds up, and there is a push to reduce it, so either more resources are applied or the quality level is relaxed to push through the increase and bring the level back to “normal?”

    0
    #201239

    MBBinWI
    Participant

    @veejayshan – I can’t share an example, but can suggest some actions that you might want to take.

    – Have you benchmarked the various projects and methods that were used to develop the capital estimates?

– What variability is there for the same individual/group in creating the capital estimates? Are there some individuals/groups that are consistently more (or less) accurate than the mean?

    – Are there characteristics of the projects that can be identified that have higher/lower variability? Perhaps those that have significant construction costs are always more variable, or those that use union labor are more variable, or …

– One tool that you might want to investigate is Monte Carlo simulation. This method can help you apply variability to the inputs and see the impact on the outputs. A good tool will provide insight into the inputs to which the output is most sensitive, so that you can focus on those to get better data. I once used this method to identify a variable external to those directly controlled by the development team that was going to have a huge impact on the outputs. That justified spending quite a bit of money on understanding this external variable so as to ensure the project benefits were more accurately understood.
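A minimal Monte Carlo sketch of the idea – the cost items, distributions, and numbers are all invented for illustration:

```python
import random

random.seed(42)

# Propagate input variability to the total capital estimate.
totals = []
for _ in range(10_000):
    construction = random.gauss(2_000_000, 300_000)          # normal input
    labor = random.triangular(400_000, 900_000, 600_000)     # low, high, mode
    totals.append(construction + labor)

totals.sort()
mean = sum(totals) / len(totals)
print(f"mean estimate: {mean:,.0f}")
print(f"90% interval: {totals[500]:,.0f} to {totals[9499]:,.0f}")
```

A commercial tool adds sensitivity charts on top of this, showing which input drives most of the output spread.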

    Hope this helps.

    0
    #201238

    MBBinWI
    Participant

    @Vidhya30 – I’m not sure exactly what you are asking, but here are some thoughts. Just for reference, I’ve spent the past 8 yrs or so working primarily with large and small food processors – everything from cheese blocks and shredded cheese, to single serve coffee products, individual drinks, to cold cuts and smoked meats. I’ve also worked in industries making large capital equipment, automotive components, and commercial food service products. So, I have a perspective within and outside of the area which seems to be of interest to you.

Let’s look at the overall issue of Lean and Six Sigma. Lean provides a perspective of eliminating wastes. These wastes, as identified in most Lean teaching, fall into 8 different categories. While Six Sigma can be applied to varying effect in each of these areas, it is most directly applicable to situations where the variability of the outputs is larger than what the customers will accept, so that variation reduction will reduce the waste of defective product. Lean also tends to have tools and methods that are easier to train and widely deploy, which makes it easier to impact the organization with smaller but more widely dispersed actions. Six Sigma, by contrast, tends to have a more concentrated group of practitioners because the statistical tools require a higher skill level to master.

Neither is the silver bullet (regardless of what some consulting firms may want to portray); rather, you must apply the right tool to the problem at hand. Nor are Lean and Six Sigma the only methods that might apply. For example, neither really has a good method of identifying specifically where to apply limited resources to achieve the most impactful results. For that, I apply concepts from Theory of Constraints, where identifying the choke point (constraint) and improving the throughput of said constraint improves the overall system.

Long story short, there is no single methodology that serves all needs optimally. You must become adept at many methods and learn which problem-solving method is most applicable to which situation. That said, Lean is easy to learn, easy to deploy widely, and provides the ability to accrue savings across a wide swath of processes, so it typically has a very good ROI.

    Hope this helps.

    0
    #201228

    MBBinWI
    Participant

    @Mike-Carnell – correct. While my response might have seemed flippant (and quite frankly, it was), as @katiebarry identified, there is no single tool. You must identify the problem, and then select the appropriate tool. None of them address every problem. I could have just as correctly responded “all of them.” As you coach, @ravikumar0423, you must learn to identify the problem and use your own abilities to search for an appropriate tool. That means doing some basic research, and Bing (or your preferred search tool) is likely your best first resource.

    0
    #201221

    MBBinWI
    Participant

    @ravikumar0423 – none of them.

    0
    #201211

    MBBinWI
    Participant

    @Mike-Carnell – good to see you back. Hope all is well with you.

    0
    #201202

    MBBinWI
    Participant

    Look at it this way – if you don’t apply CI, your managed services firm is going to continue to charge you the same year after year. You will not eliminate waste, nor streamline processes so that their services being provided are more effective/efficient. While you will be contracting for a service, you will be contracting for more effort (and believe me, they will bill you appropriately) than is necessary.

    0
    #201195

    MBBinWI
    Participant

    Peter: It looks like you are learning very basic two-level factorial design of experiments. Search this site (and the web for that matter) on those terms. I’m sure you’ll find plenty of information that will help you learn. If you still have questions, then come back here with specifics and we can clarify.

    0
    #201194

    MBBinWI
    Participant

    @aabousalem – do you have any statistical software tools available? For example, in Minitab v17, you just use the Stat/Power and Sample Size… and then choose the type of statistical test you are interested in. There is also the ability to get min sample sizes for acceptance sampling plans under the Quality Tools/Acceptance Sampling for… menu.
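If Minitab isn’t available, statsmodels can do comparable power/sample-size calculations – a minimal sketch for a two-sample t test, with an illustrative effect size and targets:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size needed to detect a medium effect
# (Cohen's d = 0.5) at alpha = 0.05 with 80% power; values illustrative.
n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"minimum n per group: {n:.1f}")  # ~64
```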

    0
    #201187

    MBBinWI
    Participant

    @lausto – From your description, I would imagine that your table was 2×2, defect/no defect on one axis, and hole/no hole on the other. Observed counts should be in each of the 4 cells. If this is what was evaluated, then your analysis was set up the way that I would have set it up.

    Are you evaluating all the components, or sampling? If sampling, are you sure it is a random sample?

Do you have data in each of the 4 cell positions? Chi-Sq loses validity when one or more cells is empty or has very low counts, particularly with so few categories.
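For reference, a minimal scipy version of that analysis – the 2×2 counts are hypothetical stand-ins for your data:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = defect / no defect, cols = hole / no hole.
table = [[18, 42],
         [7, 133]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
print("expected counts:\n", expected)
```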

    If you post your data, I can look at it more closely and give you better feedback.

    0
    #201186

    MBBinWI
    Participant

    @astronaut71 – Don’t take this the wrong way, but are you sure you are up to doing this? I’m not sure you understand what you are undertaking. There are measurements to be taken, which will call for conducting a measurement system analysis to ensure they are adequate, as well as conducting the experiments and evaluating the data.

    What you show as the seven steps isn’t all that needs to be done. As you say, you have more than 3 inputs, so you may need/choose to conduct a screening design first to see if you can reduce these inputs. You may have strictly linear response, or you may have curvature, in which case you will need to choose the correct design type to ensure that the curve terms end up in the resulting model.

    I would encourage you to find a mentor who is familiar with conducting DOE’s and ask their help/guidance. Check if there’s a local tech school or university that offers Six Sigma courses. If you cannot find anyone local, then you should read about DOE’s. Come back if you need suggestions on what to read.

    0
    #201184

    MBBinWI
    Participant

    @dean6294 – that’s still a system based on estimation/prediction and not reality. Seems this could be done better. But then what do I know?

    0
    #201183

    MBBinWI
    Participant

    @pprendeville – have you tried searching the site?

    0
    #201182

    MBBinWI
    Participant

@drheath03 – those “non-product” leftovers SHOULD be counted negatively. The objective is to increase the productivity of the sheet of steel. If you could nest perfectly and have zero left over, that would be 100% productivity for the sheet. So whatever is not put into productive use needs to be counted against the process.

    0
    #201179

    MBBinWI
    Participant

@dean6294 – I used to face the same thing in an Air Defense unit. Because we were usually tasked out to support a maneuver unit, I was rarely with the BN HQ.

    It would seem that the “address” should be the unit and not a specific geo-location. That way, as the unit moves, their “address” updates as well. With the advent of GPS it would seem this could be done all the way down to the individual vehicle level. Surprised it hasn’t been done already.

    0
    #201176

    MBBinWI
    Participant

@drheath03 – generically, productivity is a measure of how much output you get per some amount of scarce input. Usually that input is time, but it doesn’t have to be. For example, in cutting parts from a sheet of steel, productivity can be how many parts you are able to get from a sheet. As you identify, one of your items is nails. You could evaluate productivity as how many acceptable products you created per quantity of nails.

    Do you have a specific issue/question?

    0
    #201172

    MBBinWI
    Participant

    @cseider – future?

    0
    #201171

    MBBinWI
    Participant

    @lausto – I’m not sure of your question. Are you looking to evaluate whether the hole appears in a specific spot more than other locations? If so, then Chi-Sq would be one method.

    I think that you need to focus on the defect that leads to the hole. Since you likely can’t apply the SEM to every product and every location on each, you will want to determine what is causing the defect and take measures to eliminate that cause.

    0
    #201170

    MBBinWI
    Participant

    @jagadishMahamkali – while any good quality system will include a continuous improvement element, ISO9000 isn’t six sigma and six sigma isn’t ISO9000. This really isn’t the forum for ISO9000.

    0
    #201165

    MBBinWI
    Participant

    @cseider – stopped chewing gum when I kept falling down. ;-}

    0
    #201164

    MBBinWI
    Participant

    @cseider – you mean it’s not? I guess I’ve been doing it wrong all these years ;-}

    0
    #201162

    MBBinWI
    Participant

    Well, @cseider, the ideal ratio for me is 1:1. I can only work with one machine at a time.

    0
    #201161

    MBBinWI
    Participant

    @sarblakesl – Sarah, I’m going to ask that @mparet answer. She is our board Minitab contact.

    0
    #201149

    MBBinWI
    Participant

    @b1a5l9a2 – OK. We’re getting closer. Can you observe, or ask the workers, if there is adjustment going on to bring the value back to nominal? It looks to me that the lower side is happening randomly, but when the values get to the upper side, an adjustment is made to get the value back to target.

    If this is the case, then a fundamental premise is being violated in evaluating normality – that of outside adjustment of the data.

Looking at your probability plot, we use something called the “fat pencil test.” Back when these graphs were created by hand, one would take the pencil used and lay it over the data. If the pencil covered the data points, you could be fairly confident of normality. Now, with statistical tests able to calculate probabilities, we tend to rely on them. However, those statistics are susceptible to individual points, which can sway results on data that visual examination would call “close enough.”

    As @rbutler states, the question as to normality depends on the use of the data. Many statistical tests are robust to non-normality, particularly when the data is similar to what you have presented.

If I were mentoring you as one of my belts, I would have you check on the adjustment. If that’s happening, then I would go on and accept normality based on the histogram and probability plot. If not, then I would check the sensitivity of the statistical test I’m looking to apply: if it is robust to non-normality, then proceed. If it is sensitive to non-normality, then I would take some more data to ensure I have a full and complete picture. Even at 100 data points, you may have captured only one side of the distribution, and over more time/data it may fill out.

    Hope this helps.

    0
    #201147

    MBBinWI
    Participant

    Also, if you are using Minitab, when you do the normality evaluation, do you get a plot like that below? If so, can you post that as well?

    0
    #201146

    MBBinWI
    Participant

    concur with @rbutler.

    0
    #201142

    MBBinWI
    Participant

    Can you post a picture of the histogram for the data?

    0
    #201140

    MBBinWI
    Participant

    Brandon: Not sure how you are getting the values that you do for 8 sigma.

But you may have stumbled upon “the dirty little secret” of the origins of Six Sigma. If you look at a z-table (a table of the standard normal distribution), you will find 3.4 per million actually relates to 4.5 standard deviations. 6 sigma actually relates to about 0.987 per billion. So why were you taught that 6 sigma equates to 3.4 per million? You see, someone early on decided that in the “long term” there was something referred to as a 1.5 sigma shift. This was supposed to account for shift/drift that causes more variation over long periods than is observed during “short term” periods. Almost all data gathered is considered “short term,” since any data set of a continuing process must necessarily not include all data; thus there is a “longer term” that exists. With this background, it was determined that any process at 6 sigma short term would shift/drift by 1.5 sigma, and so in the long term would only be at 4.5 sigma (3.4 defects per million).

    You can search the site (and elsewhere) for the 1.5 sigma shift. There is quite a bit of discussion here and elsewhere regarding whether this truly exists or not. My perspective is that some amount of long-term shift/drift does occur, but that 1.5 sigma is not absolute. Thus, 3.4 dpmo for 6 sigma is fictitious.
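You can verify the z-table figures quoted above directly – a quick check with scipy (one-sided tail areas of the standard normal):

```python
from scipy.stats import norm

# One-sided tail areas behind the sigma levels discussed above.
print(norm.sf(4.5) * 1e6)  # ~3.4 defects per million (the quoted "6 sigma" rate)
print(norm.sf(6.0) * 1e9)  # ~0.99 per billion at a true 6 sigma
```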

    Hope this helps.

    0
    #201138

    MBBinWI
    Participant

@kknvt91 – how about number of items in queue, completion time, effectiveness % after x days of implementation (how much of the projected savings is actually being saved after implementation has stabilized), and, at 1 yr post-implementation, how many of the “fixes” are still in place and effective?

    0