Forum Replies Created
December 1, 2020 at 7:56 pm #251150
@rbutler – a function of the computer generation. If they had grown up with a slide rule, they would understand. ;-)
December 1, 2020 at 7:46 pm #251149
@rbutler – well said!
November 26, 2020 at 10:04 am #251046
@Marc68 – I’m going to take exception to @cseider and @poetengineer in their guidance on this issue. When determining calibration frequency there are two items to evaluate: bias and precision. While an MSA can help, it also introduces issues related to the human element, and that is not what calibration is about. Calibration is about whether the instrument itself (regardless of human interaction) can report a valid value at the required level of precision. To evaluate this, you should periodically measure a known-value specimen with the device. Does it continue to report an accurate value at the level of precision required?
Typically, calibration drift is a function of mechanical processes: wear and physical changes in the measurement mechanism. In systems where there is little of this, the calibration period can be set longer accordingly. This is more a function of physics and mathematics than of risk (FMEA). Setting an acceptable risk level can help establish the acceptance values to be used, but not the calibration frequency – that is a function of physics and math.
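To make that periodic check concrete, here is a minimal Python sketch; the reference value, tolerances, and readings below are hypothetical placeholders, not values from this thread.

```python
# Minimal sketch: evaluate bias and precision against a known-value specimen.
# All numbers here are hypothetical placeholders.
from statistics import mean, stdev

reference_value = 10.000       # certified value of the known specimen (assumed)
bias_tolerance = 0.005         # maximum acceptable bias (assumed)
precision_tolerance = 0.003    # maximum acceptable std dev (assumed)

# Repeated readings of the same specimen during the check
readings = [10.002, 9.998, 10.001, 10.003, 9.999, 10.000, 10.002, 9.997]

bias = mean(readings) - reference_value   # systematic offset of the instrument
precision = stdev(readings)               # spread of repeated readings

print(f"bias = {bias:+.4f}, precision (std dev) = {precision:.4f}")
if abs(bias) > bias_tolerance or precision > precision_tolerance:
    print("Instrument is drifting - recalibrate or shorten the calibration interval.")
else:
    print("Instrument still reports a valid value at the required precision.")
```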
Just my humble opinion.
November 26, 2020 at 9:48 am #251045
@ahamumtaz – You should go back and research the definitions of Cp, Cpk and Sigma that were provided to you. You should find some verbiage that states something like: Cp is “potential” capability, whereas Cpk is “actual” or “observed” capability. The difference is the location of the distribution in relation to the target and the upper/lower boundaries of acceptance. You cannot have a negative Cp, since that definition does not include the location issue (it uses absolute values). If you use Cp as your measure for Sigma, then you could have the scenario you describe – but this would be wrong.
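As a quick illustration of that location difference, here is a minimal Python sketch; the spec limits and data are hypothetical.

```python
# Minimal sketch of Cp (potential) vs Cpk (actual) capability; numbers are hypothetical.
from statistics import mean, stdev

usl, lsl = 10.5, 9.5                 # hypothetical spec limits
data = [10.2, 10.3, 10.1, 10.4, 10.2, 10.3, 10.25, 10.35]

mu, sigma = mean(data), stdev(data)

cp = (usl - lsl) / (6 * sigma)       # spread only: cannot be negative
cpk = min((usl - mu) / (3 * sigma),  # penalizes an off-center mean and CAN go
          (mu - lsl) / (3 * sigma))  # negative if the mean sits outside a spec limit

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```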
November 26, 2020 at 9:37 am #251044
@derekkoz – as the learned @rbutler identifies, once you limit the number of decimal places you are going to use, you have created a discrete measure. The key is whether the number of decimals is sufficient to provide the resolution needed to answer the question being investigated. Now, don’t get me wrong, there are certainly absolute discrete measures, but even continuous measures are functionally discrete once you limit the number of decimals. The real question is what level of precision you need in order to answer the issues you are looking to understand.
I don’t know if there is any proof of this, but generally you should have one decimal greater than the number of significant places that you are looking to analyze. So, if you are trying to answer a question where the precision is to the 5th decimal place, and you are able to accurately and precisely measure to the 6th decimal, then you should be fine. If not, then you are going to have issues.
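A quick simulated example of that effect, with made-up data and purely illustrative resolutions:

```python
# Minimal sketch: rounding a continuous measure to too few decimals makes it discrete
# and hides the variation you want to study. Simulated, hypothetical data.
import random

random.seed(1)
true_values = [random.gauss(5.0, 0.00003) for _ in range(1000)]  # variation in the 5th decimal

for decimals in (3, 4, 5, 6):
    distinct = len({round(v, decimals) for v in true_values})
    print(f"{decimals} decimals -> {distinct:4d} distinct values")
# At 3 or 4 decimals nearly everything collapses to the same few values; one decimal
# beyond the variation of interest restores usable resolution.
```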
This progression of precision is something I have seen in industry. At the outset of an improvement effort, a rather coarse measurement scale is sufficient because the issues are rather large. As these are reduced, the measurements need to become more precise in order to discern the differences and make further improvements.
Good Luck.
November 26, 2020 at 9:16 am #251043
@geozor – A good approach would be to do a root cause analysis, identifying the various contributors to the issue (a fishbone diagram would be a good way to do this). Rank the contributors by their likely impact. Then select the greatest contributor and go after improving it so that it either is no longer a contributor, or its contribution is lessened. Then attack the next greatest contributor. And so on.
August 29, 2020 at 6:57 pm #249648
@lucaspaesp – a lot of good discussion here, and I hope you’ve learned some things. Fundamentally, you have a measurement system that doesn’t have sufficient precision. There are several techniques that I might suggest, but the easiest is to use the raw data instead of sub-groups (as it seems you plotted in your graph #2).
August 29, 2020 at 6:41 pm #249647
@Mike-Carnell – and don’t get me started on the whole “1.5 sigma shift” bullshizzle.
August 29, 2020 at 6:39 pm #249646
@cseider – so, you’re telling me it ain’t? Oh, I get it, +3 and -3 must be zero sigma ;-}
August 29, 2020 at 6:35 pm #249645
@rbutler – Big Brother is always watching ;-}
August 29, 2020 at 6:28 pm #249644
@rbutler – and simple algebra at that ;-)
August 29, 2020 at 6:27 pm #249643
@irfan.ahanger01 – I’m certainly not going to disagree with @rbutler (good to see you’re still here, trying to educate the masses!), but I would take a more pragmatic approach. You already have a design with the center points, so you need to add the star points. For each factor you will need two – one high and one low, set at the alpha (axial) distance – to augment the data you already have. Each of the other factors should be set at its center point when paired with the star-point factor. Five factors, times 2 new levels each, is an additional 10 runs.
If you set up a Central Composite Design in Minitab, it also increases the number of true center points to six, for a total of 32 runs.
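Here is a minimal sketch of what those added axial runs look like in coded units; the alpha value below is an assumption (a common choice), not something taken from this thread.

```python
# Minimal sketch: generate the 10 axial (star) runs for a 5-factor design in coded units.
# alpha = 2.0 is an assumed axial distance; the value to use depends on your base design.
k = 5
alpha = 2.0

star_runs = []
for factor in range(k):
    for level in (-alpha, +alpha):
        run = [0.0] * k        # every other factor held at its center point
        run[factor] = level
        star_runs.append(run)

for run in star_runs:
    print(run)
print(f"{len(star_runs)} additional axial runs")   # 5 factors x 2 levels = 10
```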
I’m just a neanderthal compared to @rbutler, but that’s how I would do it. Just my humble opinion.
August 29, 2020 at 5:48 pm #249642
@AR19CHU91 – you might get more of a response if you posit some of your own perspectives instead of just throwing something out and asking for comments. As I have said here many times (albeit not recently) – do your own homework.
August 29, 2020 at 5:45 pm #249641
@Karthikdharmalingam – What is being identified is a measurement system error. In this situation, each internal audit sample that is taken and subsequently subjected to the ATA can be used to build a confidence interval on the data provided by the internal audits.
August 29, 2020 at 5:36 pm #249640
@AlonzoMosley – A couple of things come to mind. 1) If resources are constrained (and when aren’t they?), then the organization needs to prioritize the allocation of those scarce resources. If you have several processes that all could benefit from improvement, often your best allocation is towards the one most constrained – so long as you can actually sell the improvement (but a fundamental premise to being constrained is that you CAN sell the increased capacity, else it really isn’t constrained). 2) Another question that needs to be investigated is whether the scarce resources would be better applied to making other, non-constrained but lower return processes more efficient and therefore provide a better return.
Too many Lean/Six Sigma practitioners fail to examine these two points. They see defects or poor processes and immediately look to apply critical skills to problems that won’t improve output or really improve the bottom line. If you look at my two points above, item 1 deals with increasing the top line, and 2 deals with increasing the bottom line. One or the other should be the primary focus. I’ve seen too many instances when improvement was made that had negligible effect on either and the organization ends up getting a negative view of LSS.
If you haven’t read it, I would recommend “The Goal” by Eli Goldratt to get a better understanding of how constraints should be analyzed. There is a new(er) perspective of first using constraints evaluation to identify WHAT needs to be addressed, and then LSS or other improvement methods are properly applied to make the improvement needed.
At least, in my humble opinion, that’s how I would react to the situation you posited.
January 29, 2020 at 8:41 pm #245863
@Andy-Parr – Heck, I’d PAY to have that certification myself ;-}
January 27, 2020 at 9:19 pm #245813
@Andy-Parr, @Mike-Carnell – good to see you both are still here. Funny how some people latch on to something and their perspective totally vanishes. Oh, well. I tried. Must be getting soft in my old age – I didn’t even look to take out the flame-thrower. Best to you both.
January 24, 2020 at 10:56 pm #245748
@stephanieareid – you really are a one-trick-pony, aren’t you?
Each methodology has its utility and none are the be-all and end-all.
For others reading this (old) thread, here are a few nuggets of learning I have managed to dig up over the years. Different problems require different approaches. Lean, which our friend Stephanie seems so very fond of, is good for focusing on elements of a system that are inefficient or causing waste. Six Sigma (the DMAIC kind), which Stephanie seems to like to disparage, is very good at attacking one form of waste – defects or mistakes. One of the failings of Lean and Six Sigma is that they don’t inherently point you to the best place to apply limited resources.
Now, if you had unlimited resources, you would have no problem. Just use the tools and people liberally to attack everything. Since that isn’t the reality for most organizations, you need to prioritize what to deal with first. This is where TOC comes in very handy. You see, not all process improvements actually provide improvement that the organization needs to realize benefits. Yes, you can SMED the heck out of the operation, but if you didn’t SMED the process that is your bottleneck, you are unlikely to realize much in the way of actual savings (and will have pissed many people off doing a bunch of work that doesn’t really seem to make any important impact).
And, @cseider, I would propose to you that Drum-Buffer-Rope is a process improvement method – it is the one that aligns the cadence of the operation and helps to identify when problems arise that need attention.
But that’s just my humble opinion.
January 24, 2020 at 10:38 pm #245747
@Tipman – I’m going to disagree with @straydog that creative people are likely to resist. My experience is that they need to be shown how various tools help them to do what they inherently know they need to do.
I’m not sure just what you are having trouble with. Is it identifying new items to pursue (you will need tools that foster innovation)? Is it identifying acceptance limits for items that you have created that will satisfy customers (QFD, benchmarking, customer trial are good techniques)? Or is it making an item insensitive to manufacturing, customer use, abuse, environmental conditions (Robustness will be the approach you want to apply)?
These are areas which are covered by DfSS. I would not try this on your own unless you are willing to do a lot of study and practice many iterations. The DfSS toolkit is extensive, and many of the tools and methods take several cycles of learning before you can apply them competently.
There are several good books on DfSS – but my favorite is by Skip (Clyde) Creveling.
If you are really looking to pursue this, find a good guide (consultant) who can walk you along the journey – and it will be a journey of several years, so don’t expect to flip a switch and, voila, have things be perfect.
Good luck.
January 24, 2020 at 10:08 pm #245745
@stephanieareid – you misinterpret my comment. I don’t doubt that they work together – I was merely inquiring how the original poster thought they work together. Too many posters here look for others to do their homework (due diligence that they could figure out themselves if they just did some basic work).
My years of experience show me that they do work together. One of the wastes that Lean seeks to reduce is defects – which are often caused by being out of acceptable tolerance. That is where six sigma can be applied.
You sound like many zealots who latch on to their favorite approach/methodology/tool which becomes their hammer and every problem is a nail (the same can be said for many six sigma practitioners, by the way). I would encourage you to critically evaluate the problem that you are looking to address and apply the correct tool. By the way, Lean and Six Sigma are merely a couple of them. You should keep learning new ways to solve problems – or better yet, create a new one where no existing ones seem to be adequate.
Good luck. And open up your mind to other approaches. It will make you a better problem solver.
August 6, 2019 at 9:15 am #240851
@andy-parr, @mike-carnell: Sorry, it’s been a while. Very busy dealing with my own brood. Took on an NPD Dir role a couple of years back with a group of younger engineers who have little background/understanding of DfSS. So, I’ve been busy teaching them and they have been teaching me about commercial ovens and cooking equipment.
Best to you both, and to my friends at iSS.
December 28, 2018 at 10:35 am #210010
@michaelcyger – not sure exactly what the left side icons are supposed to be doing for me. I have numbers next to the bell, the envelope, and the group of people. When I click on the envelope and group of people, it doesn’t take me anywhere. btw – using IE11 on Win 7 Pro.
December 28, 2018 at 10:30 am #210009
@michaelcyger: Congrats. Looks nice. I agree performance is much better. Also agree with @rbutler that the missing forum listing is a real drag.
@cseider: If you just type the at sign and the first few letters of the handle you are looking to tag, a dialog pops up to select from (even has their avatar/pic if they have one – very helpful).
December 28, 2018 at 10:25 am #210008
@felixveroya: After 3 years, how have things been going? You didn’t mention what type of engineers you were looking to motivate. I assume manufacturing engineers. I have found that reward programs usually bring about cycles of fixing the same problem (it is easy to keep “fixing” the same problem instead of rooting out the cause to begin with). Manufacturing engineers typically want/need to solve issues for good as a fundamental part of their job. Recognize them for their achievements in your performance management program, not through a financial incentive program. Just my humble opinion.
December 28, 2018 at 10:19 am #210007
@Mike Carnell: Good advice.
@Jessica: As Mike cautions, be wary of someone who is a religious devotee of a specific methodology. I like to see if the person is flexible enough to seek out new/different tools to help them solve their issues. For example, many BBs slavishly look to the lowest Cpk value or highest DPMO levels. Having studied Theory of Constraints (TOC) I learned long ago that there is one item in the system that is currently holding it back from better performance. Find that singular issue and improve it (using Lean, SS, SMED, whatever it calls for to bring about improvement). Once you have improved that, look for the next constraint and move to that. A BB who talks like that is one you should not let get away. Good luck.
December 28, 2018 at 10:07 am #210006
Good points all. I agree that RTY is probably the best way to evaluate your overall quality. You apply DMAIC continuous improvement methods to improve process capabilities until the costs of further improvement outweigh the benefits. Then (or before, depending on the focus of the organization) you should apply DfSS to change the underlying design of the system to one less sensitive to the variations that exist.
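For anyone new to RTY, it is simply the product of the first-pass yields of the individual steps; a minimal sketch with hypothetical step yields:

```python
# Minimal sketch of rolled throughput yield (RTY); the step yields are hypothetical.
from math import prod

step_yields = [0.98, 0.95, 0.99, 0.97]   # first-pass yield of each process step in sequence

rty = prod(step_yields)
print(f"RTY = {rty:.3f}")                 # ~0.894: the share of units that pass every step first time
```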
To go back to the original question – it doesn’t really matter what target level you choose; what matters is that you are measuring it. If you are measuring honestly, you can then have discussions with your customer and management on whether you are currently good enough (you aren’t). But more importantly, by measuring, you will see where you can improve and begin to take steps to get better in that area. You should never be satisfied with a level – there will always be defects that can be eliminated. You should be striving for a method to identify and help prioritize where to go after improvement, as that will bring you the biggest bang for your investment. Just my humble opinion.
December 28, 2018 at 9:48 am #210005
All good points. For something like this, I would choose a spider chart. Each of your main categories becomes a spoke from a central point. You plot the score from each along this spoke and then connect the lines. This helps to identify strength areas and weak areas. You can do the same for the details within each category. This way, you get the overall perspective, and have more detailed info to dive into for improvement.
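If you have a plotting tool handy, a spider chart is easy to build; here is a minimal matplotlib sketch with hypothetical categories and scores.

```python
# Minimal sketch of a spider (radar) chart; categories and scores are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

categories = ["Quality", "Delivery", "Cost", "Safety", "Morale"]
scores = [4.0, 3.2, 2.5, 4.5, 3.8]

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]          # repeat the first point to close the polygon
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_ylim(0, 5)
plt.show()
```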
December 22, 2017 at 1:28 pm #202082
@andycroniser – why don’t you provide your explanation and we can give you feedback?
December 9, 2017 at 1:51 pm #202053
@cseider – been quite busy, so haven’t had much chance to check out the board. Hope things are well with you.
August 4, 2017 at 4:49 pm #201717
@cseider – please check your private messages. Any interest?
June 18, 2017 at 6:36 am #201573
@rubennicolas – depends on what you consider a “project.” At this point, you have already identified the issues, so Define isn’t needed. But essentially, yes, you attack each issue in order of the potential results to be obtained. The biggest defect creator might not offer the best chance of reducing defects, so evaluate each issue you have identified for the expected improvement and the difficulty of achieving it. Choose the highest results with the lowest difficulty and proceed in order. Good luck.
June 17, 2017 at 6:41 pm #201570
@rubennicolas – also, in the last 4 weeks you seem to have an inverse relationship between input and yield. Perhaps at lower inputs the process is better able to convert, so as inputs go up the conversion (yield) goes down. Food for thought.
June 17, 2017 at 6:38 pm #201569
@rubennicolas – after only 1 bad week you are considering that your results have stabilized at 93%? I see 3 weeks prior that are above 96%. I don’t think that your process has stabilized yet. You may have had some of the improvements slide back to the old method. I think you really need to establish some controls and ensure that they are effective. Review your actions, ensure that they are still in place, and monitor the process results going forward. You may find that you have actually met your improvement goal but those who are carrying out the actions are reverting to old behaviors. Good luck.
June 17, 2017 at 6:30 pm #201568
@dando – not sure exactly what you are trying to evaluate here. Typically, interactions are caused by the inputs, not a result of the outputs.
In trying to understand your query, I took your data and did some sample graphical analysis. I took what you identified as the X value (input) and some of the Y (outputs) and graphed them. I used two Y’s in a scatterplot and used the X value as a group variable. Two resultant graphs are attached. You will see that outputs a and b seem to change based on whether X is high or low. This would indicate that X high causes a different response in outputs a and b than it does when low.
In the second graph, I used a and d. Here the response is nearly parallel, with the average response being lower but with a similar slope for X being low.
However, your data has significantly more High values than Low, so this might be a matter of data overload of the high values.
Not sure if this helps or not.
June 10, 2017 at 6:51 pm #201552
@djnrempel – as @straydog mentions, a control chart is used to evaluate stability. While you might get some use out of a control chart to evaluate changes caused by various programs, it would not be my first choice. I would instead be looking at a hypothesis test such as a two-sample t test, or – perhaps more appropriately if you are applying a program change to the entire group – the paired t test.
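For reference, here is a minimal paired t-test sketch in Python (scipy assumed available); the before/after numbers are hypothetical.

```python
# Minimal sketch: paired t-test for the same group measured before and after a program change.
# Data are hypothetical; scipy is assumed to be available.
from scipy import stats

before = [72, 68, 75, 71, 69, 74, 70, 73]
after  = [70, 65, 74, 69, 66, 72, 68, 71]

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# For two independent groups, stats.ttest_ind(group_a, group_b) is the analogue.
```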
Since you are new to the statistics, I would suggest finding a mentor – you might check out a community college that teaches six sigma – to pose these questions where you can have a longer and more meaningful discussion.
Good luck.
June 1, 2017 at 7:58 pm #201527
@rbutler – very interesting. When I teach (taught, as I hope not to have to teach this again, being back in the practitioner phase) about the p-value, I always emphasize that statistics is shades of grey, but when selecting a p-value you are selecting between black and white. You must be willing to set a level of significance such that a result a shade over the line is treated one way and a shade under it the other. You cannot equivocate. Probabilities are not absolute. But p-value decisions must be.
I find your compilation of those who are unable to discern this reality very funny.
May 29, 2017 at 6:07 am #201511
@mike-carnell – I’ll take your opinion over most people’s “facts.” Have a great Memorial Day!
May 28, 2017 at 10:43 am #201506
@mike-carnell – sometimes people need to hear the truth. And sometimes that truth hurts.
May 28, 2017 at 10:37 am #201505
Billy – I won’t speak for my friend @mike-carnell, but I’ve never taken a certification at face value. Even from the most respected certifiers, there is considerable variation. Those who rely only on the certifying body usually don’t know what they are looking for nor how to evaluate those they are contacting. Thus, they are attempting to use a third party to do that work for them.
I, for one, am self-taught. My “certification” was from a test that I wrote (I was already the MBB in residence) and the evaluation committee were the belts that I had taught. I have done alright without some “name brand” certification. But you need to do what you feel is right for you.
May 25, 2017 at 7:40 pm #201477
@tgroves-EXTA – what you seem to be describing is a rolling average reported weekly of data from the past two weeks. This is a reasonable method of damping out noise/variation that occurs at a periodicity larger than the reporting period. It is not an average of an average. Your approach is well within acceptable practice.
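In pandas terms, that kind of report is just a two-period rolling mean; a minimal sketch with hypothetical weekly values:

```python
# Minimal sketch: a two-week rolling average reported weekly; data are hypothetical.
import pandas as pd

weekly = pd.Series([93, 96, 91, 97, 94, 95, 92, 96],
                   index=pd.date_range("2017-04-03", periods=8, freq="W-MON"))

rolling_2wk = weekly.rolling(window=2).mean()   # each report averages the last two weeks
print(pd.DataFrame({"weekly": weekly, "2-week rolling": rolling_2wk}))
```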
May 25, 2017 at 7:30 pm #201476
@mike-carnell – I profess, I cannot remember the instance, but am sure it was well deserved ;-}
May 25, 2017 at 7:29 pm #201475
@cseider @mike-carnell – it is a weekend to remember that our ability to pontificate without repercussion should not be taken for granted. So many had that ability cut short so that the rest of us could retain it. To the veterans out there – Thank You. To those who are the family members of those who gave all, our profound sympathy for your loss that allows the rest of us to remain free.
May 25, 2017 at 7:19 pm #201474
@cseider – are you disparaging my current home? Having grown up in Minnesota, the winter here is actually mild ;-}
May 25, 2017 at 7:16 pm #201473
@Straydog – good point. I’ve interviewed some that had “certification”, even ASQ, but couldn’t reason through one of my favorite BB questions – what do you do when you have a GR&R of 70% and a process with a Cpk of 1.7? They have been trained to react to certain values, not think and understand what the data is telling them.
May 24, 2017 at 8:51 pm #201457
@cseider – I don’t usually rant here, saving that for my better half. Living by the adage that brevity is the soul of wit, I try to maintain a minimalist presence. Although certain topics can get me pontificating vociferously.
May 24, 2017 at 8:47 pm #201456
@BuckeyeGuy92 – as @straydog identifies, what you are getting are specifications, not needs or requirements. This is the outcome of HOQ 1, not the starting point. You might consider these as the inputs to HOQ 2 and proceed to break these down into the specific design requirements.
May 24, 2017 at 8:42 pm #201455
@tgroves-EXTA – so, what you’re dealing with is a system where you want to report weekly, but the system performance is on a longer time period, perhaps 2 wks in periodicity?
May 22, 2017 at 6:02 pm #201442
@Mike-Carnell – hi, back. Russia, I’m jealous. One place I’ve wanted to visit but haven’t had the opportunity. Work is good. Lots of challenges, but a team willing to learn.
May 22, 2017 at 5:46 pm #201441
@Mike-Carnell – one needs a good rant once in a while.
May 22, 2017 at 5:28 pm #201438
@Mike-Carnell – been a busy afternoon. Good to see your comments.
May 19, 2017 at 5:17 pm #201421
@TollemG – maybe concentrate more on selling instead of trying to predict sales?
May 17, 2017 at 8:26 pm #201416
@lefthooklacey – it is sad that your company was taken in by the worst of our industry. Since you already have some idea of what six sigma is all about, I would recommend that you embark on a course of self-study. I would start by reading Deming and Juran. Then for specific tools I would check out Pyzdek and Bothe. If you are a Minitab user for your statistical analysis, I would get the manual “Lean Six Sigma and Minitab” by OPEX. I would also recommend that you hook up with a mentor. You might find one who is teaching at a community college or through ASQ.
Good luck.
May 17, 2017 at 8:11 pm #201415
@Snandy – why should we do your homework for you? Why don’t you propose an answer and your reasons why it is correct and we will tell you where you are right/wrong.
May 16, 2017 at 7:47 pm #201411
You can also use the distribution fitting tool in Crystal Ball.
May 16, 2017 at 7:37 pm #201408
@johnpeters123 – if possible, query your questioner about why they think their answer is correct, and please post back here. It would be interesting to know their rationale. I have a hypothesis that the preparers of these tests are becoming less and less knowledgeable about the subject matter.
May 16, 2017 at 7:34 pm #201407
@rbutler – or the preparer of the question isn’t all that knowledgeable.
May 15, 2017 at 7:58 pm #201382
@Straydog – Is there really anything “new?”
May 15, 2017 at 7:51 pm #201381
@kaac87 – and from where did you receive your “certification?” You might want to ask for your money back.
May 15, 2017 at 7:50 pm #201380
@johnpeters123 – none of these statements “best describes Lean.” Lean is about eliminating waste. While producing at the rate of customer demand (takt time) is a component of a lean production system, it isn’t specifically Lean. Likewise, reducing variation is a fundamental aspect of Six Sigma, but it doesn’t necessarily mean that the reduction in variation is actually eliminating waste. Only if the variation was outside of acceptable levels would that reduction be eliminating waste. If the amount of variation is acceptable to the customer, the additional costs incurred to reduce that variation could be considered a waste of its own kind.
This is a very poorly crafted question. Good luck in explaining that to the instructor.
May 7, 2017 at 6:08 am #201309
@DeanOK1969 – as @straydog identifies, spec limits have nothing to do with control limits. Your spec is what the customer will find acceptable. Thus, why would your customer not be more satisfied with outcomes above 90%? There should be no upper limit on the spec in this case. The data itself will determine your control limits; you don’t “set” them.
I think a more salient question is whether OEE is the proper metric. Is it sufficiently forward looking to be a good metric? Taking measurements on a mechanical/production process is typically good practice, as the physics are constant and the system is stable. In the case of an airport, are the components of OEE constant and predictable? I think not. You have weather that can be highly variable and unpredictable, and airlines with policies and standards that are neither standardized nor uniform. I think you need different metrics. Just my humble opinion.
May 7, 2017 at 5:59 am #201308
@Straydog – if the time required by the physical transformation is unacceptable, that identifies a need to search for a different technology or a different way to satisfy the requirements. Such an alternative may exist, or it may not, in which case a research project may be needed.
May 7, 2017 at 5:55 am #201307
@8sigma – depending on where your data lies compared to the boundary condition (in this case the lower bound being zero), you may or may not have a distribution that is reasonably normal. If very close to the boundary (as yours appears to be), you at the very least have a clipped distribution (this would occur when you have an actual normal distribution, but some portion – that under the LSL for example – is excluded from the data set), or more likely you have a non-normal distribution. Time and physical boundaries can exist where zero is an absolute. If your data always resides sufficiently far away from the boundary (at least three std dev or more) then you should not run into a boundary issue with a control chart. If closer, then you can run into problems. And you always want to check for normality and evaluate for special cause issues.
May 5, 2017 at 5:48 pm #201303
@cseider – precisely correct, my friend.
April 29, 2017 at 1:59 pm #201280
@sambit.ximb – sorry, no attached file.
April 29, 2017 at 1:57 pm #201279
@YNot – Ideally what you would want to do is construct samples that have known and distinct defects – in this case reduced signal output at specific levels. You still want to have the operators in the system as they may hook up the boards differently, with different connection seating for example. Your MSA isn’t specifically looking at the correct I/O signal, but whether degradations of the signals are correctly interpreted by the test apparatus.
April 27, 2017 at 5:34 pm #201268
Theoretically, yes. However, if they were truly not working and waiting on answers, then this is not only inefficient, but wasteful. At the very least, they should be doing some training or other useful activity during such periods.
April 26, 2017 at 5:33 pm #201265
Ramesh: You would normally perform an FMEA on a process, not on the entity as a whole.
April 24, 2017 at 4:56 pm #201253
@JamesM – I agree with @rbutler that a graphical answer results in a negative number (see attached graph). However, solving the regression equation with Dia of 6.96 gives a result well below the graphical answer. I’m going to ask @mparet to chime in, as this is a Minitab question.
April 23, 2017 at 7:20 am #201241
@Pravin25 – you first should understand the cause of the spikes. Are these due to special causes, and thus another process, or are they built into the existing process? Do these spikes, for example, coincide with a time of month, a shift, or a weather phenomenon? Or is it a situation where some backlog builds up and there is a push to reduce it, so either more resources are applied or the quality level is relaxed to push through the increase and bring the level back to “normal”?
April 22, 2017 at 7:43 am #201239
@veejayshan – I can’t share an example, but can suggest some actions that you might want to take.
– Have you benchmarked the various projects and methods that were used to develop the capital estimates?
– What variability is there for the same individual/group in creating the capital estimates? Are there some individuals/groups that are consistently more accurate and conversely less accurate than the mean?
– Are there characteristics of the projects that can be identified that have higher/lower variability? Perhaps those that have significant construction costs are always more variable, or those that use union labor are more variable, or …
– One tool that you might want to investigate is Monte Carlo simulation. This method can help you apply variability to the inputs and see the impact on the outputs. A good tool will provide insight into which input variables the outputs are most sensitive to, so that you can focus on those to get better data. I once used this method to identify a variable external to those directly controlled by the development team that was going to have a huge impact on the outputs. That justified spending quite a bit of money on understanding this external variable so as to ensure the project benefits were more accurately understood.
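As a rough illustration of that last point, here is a minimal Monte Carlo sketch; the line items, distributions, and dollar figures are hypothetical placeholders.

```python
# Minimal sketch: Monte Carlo view of a capital estimate. All inputs are hypothetical.
import random

random.seed(42)
N = 10_000
totals = []
for _ in range(N):
    construction = random.triangular(800, 1500, 1000)  # low, high, most likely ($k)
    equipment = random.gauss(600, 50)                   # ($k)
    labor = random.gauss(300, 60)                       # wider spread, e.g. union labor ($k)
    totals.append(construction + equipment + labor)

totals.sort()
print(f"P10 = {totals[int(0.10 * N)]:.0f}k  "
      f"P50 = {totals[int(0.50 * N)]:.0f}k  "
      f"P90 = {totals[int(0.90 * N)]:.0f}k")
# Re-running with each input's spread tightened in turn shows which variable moves the
# P10-P90 range the most, i.e. where better estimates are worth the money.
```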
Hope this helps.
April 22, 2017 at 7:34 am #201238
@Vidhya30 – I’m not sure exactly what you are asking, but here are some thoughts. Just for reference, I’ve spent the past 8 yrs or so working primarily with large and small food processors – everything from cheese blocks and shredded cheese, to single serve coffee products, individual drinks, to cold cuts and smoked meats. I’ve also worked in industries making large capital equipment, automotive components, and commercial food service products. So, I have a perspective within and outside of the area which seems to be of interest to you.
Let’s look at the overall issue of Lean and Six Sigma. Lean provides a perspective of eliminating wastes. These wastes, as identified in most Lean teaching, fall into 8 different categories. While Six Sigma can be applied with varying effect in each of these areas, it is most directly applicable to situations where the variability of the outputs is larger than what the customers will accept, so variation reduction reduces the waste of defective product. Lean also tends to have tools and methods that are easier to train and widely deploy, which makes it easier to impact the organization with smaller but more widely dispersed actions. Six Sigma, by contrast, tends to have a more concentrated group of practitioners because the statistical tools require a higher skill level to master.
Neither is the silver bullet (regardless of what some consulting firms may want to portray); rather, you must apply the right tool to the problem at hand. Nor are Lean and Six Sigma the only methods that might apply. For example, neither really has a good method of identifying specifically where to apply limited resources to achieve the most impactful results. For that, I apply concepts from Theory of Constraints, where identifying the choke point (constraint) and improving the throughput of said constraint improves the overall system.
Long story short, there is no single methodology that serves all needs optimally. You must become adept at many methods and learn which problem-solving method is most applicable to which situation. That said, Lean is easy to learn, easy to widely deploy, and provides the ability to accrue savings across a wide swath of processes, so it typically has a very good ROI.
Hope this helps.
April 20, 2017 at 4:57 pm #201228
@Mike-Carnell – correct. While my response might have seemed flippant (and quite frankly, it was), as @katiebarry identified, there is no single tool. You must identify the problem, and then select the appropriate tool. None of them address every problem. I could have just as correctly responded “all of them.” As you coach, @ravikumar0423, you must learn to identify the problem and use your own abilities to search for an appropriate tool. That means doing some basic research, and Bing (or your preferred search tool) is likely your best first resource.
April 19, 2017 at 5:55 pm #201221
@ravikumar0423 – none of them.
April 18, 2017 at 5:08 pm #201211
@Mike-Carnell – good to see you back. Hope all is well with you.
April 17, 2017 at 4:59 pm #201202
Look at it this way – if you don’t apply CI, your managed services firm is going to continue to charge you the same year after year. You will not eliminate waste, nor streamline processes so that the services being provided are more effective/efficient. While you will be contracting for a service, you will be contracting for more effort (and believe me, they will bill you appropriately) than is necessary.
April 16, 2017 at 5:36 am #201195
Peter: It looks like you are learning very basic two-level factorial design of experiments. Search this site (and the web for that matter) on those terms. I’m sure you’ll find plenty of information that will help you learn. If you still have questions, then come back here with specifics and we can clarify.
April 16, 2017 at 5:32 am #201194
@aabousalem – do you have any statistical software tools available? For example, in Minitab v17, you just use the Stat/Power and Sample Size… menu and then choose the type of statistical test you are interested in. There is also the ability to get min sample sizes for acceptance sampling plans under the Quality Tools/Acceptance Sampling for… menu.
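If you don’t have Minitab, the same kind of calculation can be done in Python; a minimal sketch (statsmodels assumed available, and the effect size, alpha, and power below are hypothetical choices):

```python
# Minimal sketch: sample size for a two-sample t test; settings are hypothetical.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,  # difference in std-dev units
                                          alpha=0.05,
                                          power=0.80)
print(f"~{n_per_group:.0f} samples per group")              # roughly 64 for these settings
```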
April 13, 2017 at 5:20 am #201187
@lausto – From your description, I would imagine that your table was 2×2, defect/no defect on one axis, and hole/no hole on the other. Observed counts should be in each of the 4 cells. If this is what was evaluated, then your analysis was set up the way that I would have set it up.
Are you evaluating all the components, or sampling? If sampling, are you sure it is a random sample?
Do you have data in each of the 4 cell positions? The Chi-Sq test becomes unreliable when one or more cells is empty or has a very low expected count, particularly with so few categories.
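For illustration, here is a minimal sketch of that 2×2 evaluation (scipy assumed available); the counts are hypothetical.

```python
# Minimal sketch: chi-square test on a hypothetical 2x2 defect/hole table.
from scipy.stats import chi2_contingency

#            hole   no hole
table = [[   12,      38],   # defect
         [    5,     145]]   # no defect

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
print("expected counts:\n", expected.round(1))
# If any expected count is very small, the p-value is not trustworthy; Fisher's exact
# test (scipy.stats.fisher_exact) is the usual fallback for a sparse 2x2 table.
```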
If you post your data, I can look at it more closely and give you better feedback.
April 13, 2017 at 5:09 am #201186
@astronaut71 – Don’t take this the wrong way, but are you sure you are up to doing this? I’m not sure you understand what you are undertaking. There are measurements to be taken, which will call for conducting a measurement system analysis to ensure they are adequate, as well as conducting the experiments and evaluating the data.
What you show as the seven steps isn’t all that needs to be done. As you say, you have more than 3 inputs, so you may need/choose to conduct a screening design first to see if you can reduce these inputs. You may have strictly linear response, or you may have curvature, in which case you will need to choose the correct design type to ensure that the curve terms end up in the resulting model.
I would encourage you to find a mentor who is familiar with conducting DOE’s and ask their help/guidance. Check if there’s a local tech school or university that offers Six Sigma courses. If you cannot find anyone local, then you should read about DOE’s. Come back if you need suggestions on what to read.
April 12, 2017 at 6:13 pm #201184
@dean6294 – that’s still a system based on estimation/prediction and not reality. Seems this could be done better. But then what do I know?
April 12, 2017 at 6:11 pm #201183
@pprendeville – have you tried searching the site?
April 12, 2017 at 6:10 pm #201182
@drheath03 – those “non-product” leftovers SHOULD be counted negatively. The objective is to increase the productivity of the sheet of steel. If you could nest perfectly and have zero left over, that would be 100% productivity for the sheet. So what is not put into productive use needs to be counted against the process.
April 12, 2017 at 8:25 am #201179
@dean6294 – I used to face the same thing as an Air Defense unit. Because we were usually tasked out to support a maneuver unit, I was rarely with the BN HQ.
It would seem that the “address” should be the unit and not a specific geo-location. That way, as the unit moves, their “address” updates as well. With the advent of GPS it would seem this could be done all the way down to the individual vehicle level. Surprised it hasn’t been done already.
April 11, 2017 at 3:12 pm #201176
@drheath03 – generically, productivity is a measure of how much of an output you get per some amount of scarce input. Usually, that is time, but it doesn’t have to be. For example, in cutting parts from a sheet of steel, productivity can be how many parts you are able to get from a sheet. As you identify, one of your inputs is nails. You could evaluate productivity as the number of acceptable products created per quantity of nails.
Do you have a specific issue/question?
April 11, 2017 at 4:53 am #201172
@cseider – future?
April 11, 2017 at 4:46 am #201171
@lausto – I’m not sure of your question. Are you looking to evaluate whether the hole appears in a specific spot more than other locations? If so, then Chi-Sq would be one method.
I think that you need to focus on the defect that leads to the hole. Since you likely can’t apply the SEM to every product and every location on each, you will want to determine what is causing the defect and take measures to eliminate that cause.
April 11, 2017 at 4:38 am #201170
@jagadishMahamkali – while any good quality system will include a continuous improvement element, ISO9000 isn’t six sigma and six sigma isn’t ISO9000. This really isn’t the forum for ISO9000.
April 10, 2017 at 12:31 pm #201165
@cseider – stopped chewing gum when I kept falling down. ;-}
April 10, 2017 at 12:23 pm #201164
@cseider – you mean it’s not? I guess I’ve been doing it wrong all these years ;-}
April 10, 2017 at 12:17 pm #201162
Well, @cseider, the ideal ratio for me is 1:1. I can only work with one machine at a time.
April 10, 2017 at 12:14 pm #201161
@sarblakesl – Sarah, I’m going to ask that @mparet answer. She is our board Minitab contact.
April 9, 2017 at 1:37 pm #201149
@b1a5l9a2 – OK. We’re getting closer. Can you observe, or ask the workers, if there is adjustment going on to bring the value back to nominal? It looks to me that the lower side is happening randomly, but when the values get to the upper side, an adjustment is made to get the value back to target.
If this is the case, then a fundamental premise of evaluating normality is being violated – the data are being adjusted from outside the process.
Looking at your probability plot, we use something called the “fat pencil test.” Back when these graphs were created by hand, one would take the pencil used and lay it over the data. If the pencil covered the data points, you could be fairly confident of normality. Now, with statistical tests able to calculate probabilities, we tend to rely on them. However, those statistics are susceptible to individual points that can sway the result even when visual examination would call the fit “close enough.”
As @rbutler states, the question as to normality depends on the use of the data. Many statistical tests are robust to non-normality, particularly when the data is similar to what you have presented.
If I were mentoring you as one of my belts, I would have you check on the adjustment. If that’s happening, then I would go on and accept normality based on the histogram and prob plot. If not, then I would check on the sensitivity of the stat test that I’m looking to apply and see if it is robust to non-normality, and if so, then proceed. If it is sensitive to normality, then I would take some more data to ensure I have a full and complete picture. Even at 100 data points, you may have only captured one side of the distribution and over more time/data it may fill out.
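For anyone following along without Minitab, here is a minimal Python sketch of the same two checks (scipy and matplotlib assumed available); the data are simulated placeholders.

```python
# Minimal sketch: Anderson-Darling normality test plus a probability plot.
# Data are simulated placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=2, size=100)

result = stats.anderson(data, dist="norm")
print("A-D statistic:", round(result.statistic, 3))
print("5% critical value:", result.critical_values[2])   # index 2 is the 5% level

stats.probplot(data, dist="norm", plot=plt)               # the "fat pencil" view
plt.show()
```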
Hope this helps.
April 9, 2017 at 7:06 am #201147
Also, if you are using Minitab, when you do the normality evaluation, do you get a plot like that below? If so, can you post that as well?
April 9, 2017 at 7:03 am #201146
Concur with @rbutler.
April 8, 2017 at 6:03 am #201142
Can you post a picture of the histogram for the data?
April 7, 2017 at 7:49 pm #201140
Brandon: Not sure how you are getting the values that you do for 8 sigma.
But you may have stumbled upon “the dirty little secret” of the origins of Six Sigma. If you look at a z-table (a table of the standard normal distribution), you will find 3.4 per million actually relates to 4.5 standard deviations; 6 sigma actually relates to 0.987 per billion. So why were you taught that 6 sigma equates to 3.4 per million? You see, someone early on decided that in the “long term” there was something referred to as a 1.5 sigma shift. This was supposed to account for shift/drift that causes more variation over long periods than is observed during “short term” periods. Almost all data gathered is considered “short term”, since any data set from a continuing process necessarily excludes some data, so a “longer term” always exists. With this background, it was determined that any process that was at 6 sigma short term would shift/drift by 1.5 sigma, and so in the long term would only be at 4.5 sigma (3.4 defects per million).
You can search the site (and elsewhere) for the 1.5 sigma shift. There is quite a bit of discussion here and elsewhere regarding whether this truly exists or not. My perspective is that some amount of long-term shift/drift does occur, but that 1.5 sigma is not absolute. Thus, 3.4 dpmo for 6 sigma is fictitious.
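The z-table arithmetic is easy to verify yourself (scipy assumed available):

```python
# Minimal sketch: tail probabilities behind the 3.4 dpmo figure.
from scipy.stats import norm

print(norm.sf(6.0) * 1e9)   # ~0.99 defects per billion for a true 6-sigma tail
print(norm.sf(4.5) * 1e6)   # ~3.4 defects per million: 6 sigma minus the claimed 1.5 shift
```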
Hope this helps.
April 7, 2017 at 7:57 am #201138
@kknvt91 – how about number of items in queue, completion time, effectiveness % after x days of implementation (how much of the projected savings are actually being saved after implementation has stabilized), and at 1 yr post implementation how many of the “fixes” are still in place and effective?