Forum Replies Created
February 8, 2010 at 1:14 pm #189167
Thanks to all of you for the responses. The pain my sponsor has brought to me is that inconsistencies in our Work Order invoicing practices make comparisons across stores difficult and limit our ability to identify best practices and improvement opportunities. I am talking with my MBB today about getting the scope of this project in line so that we are measuring data that matters for this pain. For example, items such as travel time, mileage charges, and other charges to the customer are getting written off in order to secure the sale, and those write-offs land in different accounts depending on the store. Some stores are writing off to “Competitive Allowance” and others are not, which makes it difficult to compare store to store when the same write-off is charged to different accounts. Hopefully this makes sense; I feel like I am rambling. Hopefully I will find more direction today when I meet with my MBB.
December 21, 2009 at 4:52 pm #187679
I agree that you have to be cautious: comparing the coordinates to an inflated tolerance can give a false sense of security about the measurement system. But you also cannot look only at the system against itself (%Study in Minitab terms), because that does not tell you whether the measurement system is adequate for your needs. It is possible to have a system that shows a low %Study result but is still unacceptable when compared to the tolerance. %Tolerance lets you see whether the system can meet the requirements, and %Process lets you see whether the system is capable enough to help control your process.
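For concreteness, the three ratios differ only in what the gage variation is compared against. A minimal sketch (the function and the numbers are mine, not Minitab output; Minitab also applies a 6-sigma or 5.15-sigma study-variation convention, which cancels out of %Study):

```python
import math

def grr_metrics(sigma_gage, sigma_total, tol_width):
    """Gage R&R expressed three ways, as percentages.

    sigma_gage  : std dev of the measurement system (repeatability + reproducibility)
    sigma_total : total observed std dev (parts plus gage)
    tol_width   : USL - LSL
    """
    pct_study = 100 * sigma_gage / sigma_total       # %Study: gage vs. total variation
    pct_tol   = 100 * (6 * sigma_gage) / tol_width   # %Tolerance: gage spread vs. spec window
    sigma_part = math.sqrt(sigma_total**2 - sigma_gage**2)
    pct_process = 100 * sigma_gage / sigma_part      # %Process: gage vs. part-to-part variation
    return pct_study, pct_tol, pct_process

# Invented numbers: a gage that looks fine against the observed variation
# can still consume most of a tight tolerance.
print(grr_metrics(sigma_gage=0.001, sigma_total=0.004, tol_width=0.010))
```

With these invented numbers the gage is about 25% of the total and of the part-to-part variation, yet consumes 60% of the tolerance, which is exactly the trap of judging the system only against itself.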
I think looking at the coordinates is the right way to go, but the question is what is used to construct a “window” to be able to determine if the amount of variation is acceptable or not.
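One hedged sketch of such a window, assuming the callout follows the usual convention that a true position value is the diameter of a circular tolerance zone around the nominal location (the function names are mine):

```python
import math

def within_true_position(dx, dy, tp_diameter=0.010):
    """True position zone: the measured center (offset dx, dy from nominal)
    must fall inside a circle of the callout diameter centered on nominal."""
    return math.hypot(dx, dy) <= tp_diameter / 2

def conservative_axis_limit(tp_diameter=0.010):
    """A per-axis 'window' only exists as the inscribed square: holding
    |dx| and |dy| each below tp_diameter / (2 * sqrt(2)) guarantees the
    circular zone is met, but rejects some points the circle would accept."""
    return tp_diameter / (2 * math.sqrt(2))

print(within_true_position(0.003, 0.003))  # inside the 0.005 radius
print(conservative_axis_limit())           # ~0.00354 per axis
```

This is why a simple X-and-Y box is either conservative (inscribed square) or too loose (circumscribed square); the zone the callout actually defines is radial.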
I hadn’t thought of polar coordinates, but will look into that idea more, too.
I do appreciate the feedback and ideas!!
December 18, 2009 at 8:16 pm #187651
That is a great idea and I have tossed that approach around.
Getting data to analyze isn’t the issue. The question with that approach is: what tolerance do you compare the X & Y coordinates to? If there is a true position callout of 0.010, what does that translate to in terms of an X & Y coordinate tolerance?
December 18, 2009 at 6:43 pm #187649
We are using a customized operator-assisted camera system, basically something similar to an optical comparator. Not sure how the method comes into play here, but I am interested to hear your ideas!!
October 13, 2009 at 1:50 pm #186065
I too need a good 5S audit form for the Mfg floor.
Thanks
April 29, 2009 at 3:16 am #183762
I’ve been using catapults since the ’80s, and I purchased several from a man (a woodworker, actually) while I was at Texas Instruments and BAE Systems in Austin. The catapult in the picture you posted is THE catapult he built in Austin. I will look for his address, which I hope to have at work. Oh, and there is a BIG difference in catapults, at least as far as I’m concerned. The pictured one lets you adjust stop angle, start angle, band length, and arm length. Most of the other catapults I’ve seen only allow adjustments in a few increments (discrete); this one allows continuous adjustments.
February 25, 2009 at 8:23 pm #181700
HF Chris Vallee,
I agree with you 100%. Stopping at a containment action (unless the containment and permanent corrective actions are the same) does not prevent the problem from recurring. Yet both containment and permanent corrective actions are needed. The band-aid stops the boat from sinking, giving time to implement a permanent corrective action. The containment action is particularly important to stop defects from leaking out to the customer, whether external or internal.
Just to keep a bit of continuity between discussions: I made the assumption that the root cause of the wrong upper test limit would be found and fixed. Given that assumption, the question I then answered was this: having resolved the root cause of the wrong upper test limit, a necessary step, is that resolution sufficient reason to stop looking for new root causes?
One More Point: The results of a root cause search are presented in an 8D report. Within that report, I see two other steps that are often dropped. First is the confirmation of the solution once it is fully implemented in production; I have seen permanent corrective actions in software that appeared to work fine until a bug was found after full implementation. Second, the preventive action step is often dropped. What a waste for someone else to have to solve the same problem on a similar product six months later.
My thanks for making me think deeper.
February 24, 2009 at 6:03 pm #181646
HF Chris Vallee,
Customer Issue: A permanent corrective action was not provided.
Comments: You make an excellent point. I provided the containment action but not a permanent corrective action; for example, a sign-off procedure by another individual to validate the entry, or some other procedure, is needed to prevent the error from happening again.
On the other hand, my purpose was to illustrate when you stop looking for more root causes. Notice that I stopped looking at Cpk = 1.0. I am sure you agree that a process with Cpk = 1.0 is not a very good process.
February 22, 2009 at 2:48 pm #181562
Customer issue: Establishing Values for Factors in a DOE
Issue with the Problem Statement: You should have included the units of pressure: mm-Hg, atm, lb/in², etc.
DOE Factor: I believe that both Jane and Robert covered many key issues that you need to be aware of. Here are three more thoughts:
(1) Why widen the pressure window? Perhaps the original conditions are looking at a local maximum or minimum, not the global one.
(2) Include a center point for pressure.
(3) One DOE does not optimize a process. Suppose you are looking at yield from a chemical reaction; the interaction between temperature and pressure may be more important than pressure by itself.
February 22, 2009 at 1:32 pm #181560
Customer issue: When is a root cause a real root cause and do I stop finding root causes
Conclusion: Keep identifying and removing root causes until the project objective has been met. If you decide to make further improvements beyond the project goal, create a new Six Sigma project.
Set Six Sigma Project Objective: Suppose a test of a specific CTQ output on an electronic device has a Cpk = 0.5. Your Six Sigma project goal is Cpk >= 1.0.
Root Cause Identified: Your team identifies a root cause: the upper test limit (UTL) was entered incorrectly into the automated test system. You make the correction and measure Cpk again. With the new UTL, Cpk = 0.7.
Meeting the Project Objective: The project objective has not been met; clearly, another root cause is behind such a low Cpk value. Since the project goals have not been met, you need to find another root cause and make the needed correction.
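The Cpk bookkeeping in this example can be sketched as follows; the mean, sigma, and limits below are invented numbers chosen to land near the quoted 0.7:

```python
def cpk(mean, sigma, lsl, usl):
    """Cpk: distance from the mean to the nearer spec limit, in units of 3 sigma."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# After fixing the wrong UTL, the nearer limit is still only ~2.1 sigma away,
# so Cpk = 0.7 and the project goal of Cpk >= 1.0 is not yet met.
print(cpk(mean=10.0, sigma=0.2, lsl=9.4, usl=10.42))  # 0.7
```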
Stop: When the project goals have been met, the project is completed.
February 21, 2009 at 9:36 pm #181554
Customer Issue: Need to achieve 25 ppm defect rate or lot will be rejected
Terminology: Specifications and test limits are two different things. In my work, the customer establishes the specifications, and the test limits are derived from them. The test limits are tighter than the specification limits. For example:
Specifications = 10 cm +/- 0.5 cm → This implies that the 25 ppm can’t be improved unless the customer agrees to widen the specification limits.
Test limits = 10 cm +/- 0.4 cm → This requires an evaluation of risk to widen the test limits.
Purpose of Test Limits: The purpose of the test limits is to guarantee (used loosely) that the outgoing product meets the customer’s specifications.
What Determines the Test Limits: Here are a few factors that can affect the selection of test limits when extremely small defect rates are required: (1) gauge capability, (2) correlation between production tools, (3) correlation between production measurements and customer measurements, and (4) Cpk.
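A sketch of how test limits inside spec limits behave, using the 10 cm example above (the category names are mine):

```python
def classify(value, nominal=10.0, spec_tol=0.5, test_tol=0.4):
    """Guard banding: test limits sit inside the customer's spec limits, so a
    part that passes test is unlikely to be out of spec even with some
    measurement error."""
    dev = abs(value - nominal)
    if dev <= test_tol:
        return "pass"            # inside the tighter test limits
    if dev <= spec_tol:
        return "guard band"      # in spec, but rejected by the test limits
    return "out of spec"

print(classify(10.35))   # pass
print(classify(10.45))   # guard band
print(classify(10.60))   # out of spec
```

The width of that guard band is where the listed factors (gauge capability, tool and measurement correlation, Cpk) come in: the worse they are, the wider the band has to be.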
Perhaps you can think of other factors.
February 21, 2009 at 4:32 am #181543
Nice job. The example really brings home the point. The correlation between factors A & B is very good, and the p-value for the difference in population means is not statistically significant, yet the p-value for the paired analysis shows a statistical difference. I used Excel for the example, and the numbers I obtained are below.
Here is the major question: are you trying to show that the means of two different populations are statistically different? Note: a linear regression with a good correlation coefficient does not mean that the intercept and slope are very near 0 and 1, respectively. The intercept or slope can differ from 0 and 1 and still give a great correlation coefficient, while the means of the two populations are statistically different.
Perhaps your customer was saying that the individual measurements are correlated. For example, equal numbers of samples were exposed to the same treatment (heated 45 minutes at 350F) at the same time. In which case, the great example given by Bower Chiel reinforces the power of a paired comparison.
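The effect is easy to reproduce with invented numbers: a consistent ~1-unit treatment shift is invisible to a two-sample t statistic when part-to-part spread is large, but obvious to the paired statistic. Plain-Python t statistics, so no stats package is assumed:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# The same 8 samples measured before and after a treatment that adds ~1.0.
before = [10.0, 25.0, 17.0, 30.0, 22.0, 14.0, 28.0, 19.0]
after  = [11.1, 25.9, 18.0, 31.2, 22.8, 15.0, 29.1, 19.9]
n = len(before)

# Two-sample t: the ~1.0 shift drowns in the part-to-part spread (sd ~ 7).
se_ind = math.sqrt(sd(before) ** 2 / n + sd(after) ** 2 / n)
t_ind = (mean(after) - mean(before)) / se_ind

# Paired t: differencing removes the part-to-part spread entirely.
diffs = [a - b for a, b in zip(after, before)]
t_paired = mean(diffs) / (sd(diffs) / math.sqrt(n))

print(t_ind)     # well under 1, nowhere near significant
print(t_paired)  # huge, overwhelmingly significant
```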
ANOVA not = var
0.008538781
February 20, 2009 at 9:09 pm #181539
February 19, 2009 at 4:28 pm #181453
(1) What is COV?
(2) Your demand model is ambiguous. You state that items are delivered to a processing facility on a fixed transportation schedule; that part is predictable. What is not predictable is the demand from your customer, which determines how many items will be on the transportation vehicle. Is your demand a sine wave or a random delta function? Lunch and breakfast at McD’s is not a random delta function. Have you tried to make a histogram of demand vs. time during the day?
February 19, 2009 at 3:47 pm #181446
A Bit of Slang: Wow! I like your detailed explanation. Understanding as much of the science surrounding a problem as possible is a significant part of good engineering.
What is next: The next part of good engineering for K’s type of problem is to obtain data and estimate the risk. This effort is worthwhile from several points of view. Here are just a few: (1) K’s company will have an estimate of the risk associated with their decision; (2) if there are customer complaints, an 8D report can be assembled quickly, and a good 8D report can be the difference between keeping and losing a customer; (3) it builds their database and understanding of the passivation layer; and (4) it provides an opportunity to learn how to perform rapid reworks in house.
Thoughts: I wonder how many diagnostics K’s company has available: SEM, optical microscopy, cross sectioning, ultrasound, etc.
One More Question: What is the risk of a part becoming nonconforming with the 2nd passivation treatment? Since the passivation process appears to be more selective for free iron removal, my guess is the risk is minimal.
February 18, 2009 at 5:07 pm #181400
Addition: I must modify my previous comment to more accurately state what I think you should do.
Customer Issue: to Passivate or not to Passivate a re-worked part.
Part of Solution: You need to perform the following experiments with (1) the original parts before reworking, (2) reworked parts, and (3) reworked and repassivated parts.
Experiment: Perform an accelerated reliability test and see what happens. The copper sulfate solution suggested by Jsev607 can be coupled with a salt spray and high-humidity, high-temperature exposure for accelerated corrosion testing.
What about finding an autoclave and running the three types of parts (original, reworked, reworked and repassivated) for 100 cycles? Could this be done in two days? This would be like a HAST test, with higher temperatures and pressures being better than lower ones.
What I don’t know: What does the passivation really do? Does it grow an oxide? If so, how thick and how porous is the oxide? Is the passivation process diffusion controlled? If so, the density of the oxide may be a function of the oxide thickness. Does the passivation remove free iron, which could otherwise leave two dissimilar metals in contact and thereby initiate corrosion? How effective is the Citrisurf (which I assume is an environmentally friendly substitute for a nastier acid solution like nitric) at growing a dense oxide and dissolving free iron?
Analysis: When you subject all three types of parts to the accelerated reliability test, you have a nice B vs. C comparison. Since you are short on time, some information is better than none.
Question: What would you do if a quick accelerated reliability test showed that reworked parts without repassivation appeared to look as good as the original part?
February 18, 2009 at 2:44 pm #181384
Customer Issue: to Passivate or not to Passivate.
Part of Solution: You need to perform the following experiments with the reworked parts, and reworked and repassivated parts.
Experiment: Perform an accelerated reliability test and see what happens, and be a little late with the delivery.
February 16, 2009 at 1:01 pm #181220
Customer issue: FMEAs produce “paralysis by analysis”
My Thoughts: A two-hour meeting for an FMEA is too long. There are four major reasons an FMEA is done: (1) customer driven, (2) audit driven, (3) management driven (have to get it done), or (4) product quality team driven. Since the meeting lasted two hours, I expect one of the first three reasons is why the FMEA was performed. If the FMEA meeting was called for any of the first three reasons, most team members will still attend with the idea of making the product better. On the other hand, the underlying attitude and procedure are likely to produce results that do not capture all the critical-to-quality issues.
The purpose of the FMEA is to capture all potential failure modes. If you create the Ishikawa diagram and narrow the failure modes to what you believe are the CTQ issues, then you might as well do most of the FMEA yourself. You have removed the reason teams are formed: diversity.
Like other teams, the FMEA team goes through four stages: forming, storming, norming, and performing. Because most FMEA teams are formed and disbanded very quickly they seldom become truly effective at performing. Consequently, CTQ issues are missed. This adds to the attitude of many managers that FMEA are of little value.
Solution: The best solution, given the constraints of the Six Sigma discussion forum and reasons 1, 2, and 3 for performing an FMEA, is to break the FMEA meeting into three 40-minute sessions held over a two-week period. Break sharply at 40 minutes or less.
Expected Results: The three 40-minute meetings force the team to move faster; you get the same two hours of meeting time from a team that performs better. For a product with little complexity, you will have done a pretty good job. For products with a large degree of complexity, you will have added to the belief that FMEAs have little value.
Question: What was your reason for doing the FMEA?
February 13, 2009 at 5:06 pm #181174
Customer’s Problem Statement: “The issue of course is that if we only consider monthly volumes of received/defective parts we get PPMs that are all over the place, for instance.”
Objective: Establish a metric that is of greater value to management.
Assumptions: (1) You are working to improve supplier quality. (2) An incoming inspection is used to accept or reject the incoming lot, and the lot has been accepted. (3) The number of parts released monthly into production is a known quantity, N. (4) The number of defects found each month is D. Other assumptions matter, but these are enough for now.
PPM = (1e6)D/N.
Step 1. 200 parts are released into production, N = 200. 3 defective parts were found that month, D = 3. Defective PPM = (1e6)(3)/200 = 15,000.
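Step 1, and the cumulative rollup described in Step 2, can be sketched as follows (the monthly counts after the first are invented):

```python
def ppm(defects, released):
    """Defective parts per million: (1e6) * D / N."""
    return 1e6 * defects / released

monthly_defects  = [3, 0, 5, 2]          # D per month (first is the example, rest invented)
monthly_released = [200, 180, 400, 250]  # N per month (same caveat)

monthly_ppm = [ppm(d, n) for d, n in zip(monthly_defects, monthly_released)]
print(monthly_ppm[0])   # Step 1: 1e6 * 3 / 200 = 15000.0

# Cumulative PPM: total defects to date over total releases to date.
# It steadies toward the long-run average as months accumulate.
cum_d = cum_n = 0
cumulative_ppm = []
for d, n in zip(monthly_defects, monthly_released):
    cum_d += d
    cum_n += n
    cumulative_ppm.append(ppm(cum_d, cum_n))
print(cumulative_ppm)
```

The small monthly N values are exactly why the monthly bars jump around while the cumulative bar settles.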
Step 2. This could be a statistical outlier. Therefore, create a bar graph with two output variables and month as the input variable (Excel works very well). One bar is the monthly PPM; the other is the cumulative PPM. At the end of the 12th month, the cumulative PPM is the sum of all defects times 1 million, divided by the number of parts released for the year. The cumulative PPM approaches a value for typical performance ability, the average PPM. If you have sufficient historical data, the total number of parts released and total defects could be used in a Chi-Square analysis to see if process improvements are statistically significant.
February 13, 2009 at 12:10 pm #181157
You guys can laugh at the following statement — you made a mistake when you read my statement.
I said: “Using interferometry-based contact gauges, I can easily see a resolution of even less than 0.12 um = 1200 Angstroms.”
February 12, 2009 at 3:56 pm #181106
I have been too busy to see your response. Now I am back.
1. Dan — I expected that you knew the answer to the question, “what is a mechanical gauge?” I was hoping that some of the others would reveal their technical knowledge by answering that question.
2. Based upon Dan’s description, I would broadly describe a mechanical gauge as a tool that (1) makes physical contact with the object and (2) has length as its unit of measure.
3. With this definition, the displacement of the gauge head (which makes physical contact with the object being measured) relative to its zero position may use a vernier scale, an encoder (electrical or optical), or maybe interferometry. With interferometry, I can easily see resolution of less than 0.001 mm if someone learns how to properly make a measurement.
4. Since no one replied to another posting that I made, I will repeat it below. I do this because it shows how easily a Six Sigma team can lose sight of the purpose of the quality improvement project. Of course, in a real team, the team leader and/or facilitator (who may be the same person) would keep the project moving in the right direction.
Repeated from Another Posting
Now that this discussion has traveled a tortuous path, let all of us go back and see the original question.
The Original Question by Paula Dooling: The automotive industry is driving toward very small connector systems. With the small requirements come measurement concerns on extremely tight tolerances. I was wondering what type of measurement equipment would be able to measure a product with tolerances of +/-.01 with good GR&R results. Any ideas or advice would be welcome?
The automotive industry uses lots of plastics. Some of which are very soft. I have found that obtaining a good GR&R on a soft plastic is not easy. Making a good measurement to 0.001 inches using a mechanical gauge (for example a digital bore gauge) deforms the plastic which results in a large within part variation and a poor GR&R. One quickly learns that selecting the correct gauge becomes very important.
No one bothered to ask Paula what type of material was being measured! Let us not forget that there is a difference between the gauge and the part that is being measured. The two (along with the operator) form a system, and the results of the system measurement are what the GR&R measures.
August 13, 2008 at 11:29 am #62063
Black Belts lead teams to success in strategically critical initiatives throughout healthcare’s value stream – from concept of care through healing. They are highly trained in the Lean Six Sigma methodology proven to achieve breakthroughs in quality and productivity.
Maybe this gives you some words to wordsmith for your needs. If your BBs are experienced, change “trained” to “skilled”.
The original question: “Does anyone have a good ‘elevator speech’ for what Black Belts do in a healthcare organization, and what are Six Sigma and Lean? We are working on our ‘elevator speech’ and wanted to know what other Black Belts are saying that is short and easy to understand for all staff in the organization.”
June 27, 2008 at 12:15 pm #173299
Please keep this discussion in context. Mario starts his post with “The Story goes like this”.
My post was in no way directed at any individual nor did I question anyone’s integrity. Especially Mario’s. I responded to a STORY not to an individual. If you look at my original post the opening was not addressed to anyone.
I am getting out of this forum before I get virtual injuries.
Too rough for me.
Regards
June 26, 2008 at 1:54 pm #173252
I knew this would somehow be construed as blasphemy against the religion. Explain “You don’t know what you are talking about”.
June 26, 2008 at 12:27 pm #173243
The invoice should have read: One day’s work…………..$1500.00
Pointing out how incompetent you and your staff are………….PRICELESS
This demonstrates just how bad it has become in our business society today, that we must rely on someone who has 1,000 acronyms after their name to certify them as a WB, YB, BB, or MBB. (Who certified the certifier, by the way?) It is highly unlikely that Six Sigma methodology was implemented in this case. It would take several weeks of non-revenue-producing DMAIC (how do you like that acronym?) to come up with the solution. The 2 by 5 minus 1 factorial wouldn’t get past the tip of the iceberg. If this case is true (doubtful), it’s not an application of Six Sigma. It does indicate that far too many people are educated beyond their intelligence.
June 11, 2008 at 6:16 am #57580
Hi, Nusha. You see, the Central Limit Theorem rises again. Let me know if you need more help.
May 15, 2008 at 3:46 pm #171998
Looks like I could use this too. Please send to [email protected]. Thanks!
April 18, 2008 at 9:39 pm #59339
I would appreciate a copy of the Card Drop Game too.
Rick
April 18, 2008 at 7:38 pm #59338
I’m adding my name to the chorus of requests for the articles.
[email protected]0April 9, 2008 at 5:15 pm #170851
Please send a copy of your Sample Size document. Thanks
[email protected]0February 5, 2008 at 12:55 pm #168180
It sounds like you work in the manufacturing area or production line of a small business.
Some things, such as reliability testing, cannot be taught to line operators to look for. The fact of the matter is that when it is highly critical that the product you manufacture hold very tight tolerances, you cannot always rely exclusively on your vendors to supply what they say they are supplying. Suppliers have been wrong in the past and will be wrong in the future. You want to make sure that a vendor’s potential variability does not enter your process stream; this is why it IS necessary to have quality control checks beyond what the eye can detect on your production line.
Get it?
February 4, 2008 at 3:35 pm #168143
It is true, QC is not a value-adding entity. It is a function that manufacturers put in place so that their customers do not have to. A lot of customers require this of their suppliers so that they do not have to absorb the cost of running an incoming inspection operation.
Rick
December 12, 2007 at 9:39 pm #57466
1. For asking the question. So many people just assume it is half. Wrong.
2. For Adam’s response. On the right track
Can it be estimated? Of course, yes; but you want some guidelines. Adam’s point about variables that can affect the dispersion bi-weekly but may not show on a weekly basis is right on, and this is what makes the estimate more difficult. So many demand variations occur at month-end. This “month-end” mentality suggests your question should also specify which bi-weekly period within the month cycle you mean.
Possibly in your favor is that you have the data to calculate the standard deviation correctly. If you have weekly data, then: (1) bin your data bi-weekly; (2) run an SPC chart to look for stability in time-series order; (3) look for any pattern of month-end cycles or other patterns that make the standard deviation vary from one bi-weekly period to the next; (4) alternatively, use an X-bar and R chart subgrouped bi-weekly to see the patterns, if any; (5) if the data are stable and no patterns are evident, calculate your standard deviation across these bi-weekly subgroups.
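The binning step and the no-data fallback discussed next can be sketched as follows (demand numbers are invented; the SPC and pattern checks are left to a chart). For what it's worth, the 70% rule of thumb is close to the 1/sqrt(2) ≈ 0.707 you would expect if weekly demands were independent, since summing two weeks doubles the mean but grows the standard deviation by only sqrt(2):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def cov(xs):
    """Coefficient of variation: std dev relative to the mean."""
    return sd(xs) / mean(xs)

weekly = [120, 95, 140, 160, 80, 105, 150, 170]  # invented weekly demand

# Step 1: bin the weekly data into bi-weekly buckets.
biweekly = [weekly[i] + weekly[i + 1] for i in range(0, len(weekly), 2)]
print(biweekly)   # [215, 300, 185, 320]

# Fallback when only weekly data exists: estimate bi-weekly COV at
# ~70% of weekly COV (the independence approximation above).
est_biweekly_cov = 0.7 * cov(weekly)
print(round(cov(weekly), 3), round(est_biweekly_cov, 3))
```

Month-end spikes and day-of-week cycles break the independence assumption, which is exactly why the post warns that the 70% figure is only a first stab.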
Now, I assume you thought of that but don’t really have the data. So, intuitively, you expect the standard deviation over a longer period to be smaller relative to the mean demand than it is over a shorter period. (6) How much? I would answer this empirically as you get data; with no data, make a first stab at 70% of the weekly coefficient of variation (st dev/mean). In other words, if the COV is 50% on a weekly basis (which I find often for high-selling products), I estimate the bi-weekly standard deviation at 70% of that 50% COV. But I bet you have month-end cycles and day-of-week patterns (especially on consumer luxury goods) that make this estimate tough. Let me know if you need more.
October 5, 2007 at 12:22 pm #64857
August 23, 2007 at 2:08 am #59145
I just had to share an idea on this creative issue. It very much is a SS opportunity, being an MBB with a long-ignored real estate license.
I like the prior post about looking for the x’s that are going wrong. You all know the three most important x’s in real estate. Besides location, the current economy, with companies like Toll Bros. seeing cancellations, may even override the “Big 3”. Fact-based, data-driven is definitely worthwhile, and a good realtor is usually pretty good at getting the right facts on comparable sales and helping the homeowner with the right economics for the location and local economy.
A little 5S may also be in order if it is a rental unit, if you know what I mean.
You can’t do much about the Big 3, but there are other x’s with price and selling talent high on my list. Get the SIPOC and Cause and Effect dusted off and include your realtor and a likely buyer in your study – yes – a likely buyer. Have fun and let me know how it comes out. I have absolutely no experience in trying SS to selling a house, so “Buyer beware.”
August 23, 2007 at 1:56 am #59144
OK, folks. Seems like we are all in this together. Sorry I did not see the requests until now.
Please send an e-mail to me at [email protected] and I will reply back with an example.
Good luck and hope this helps. Again, I am passing on a spreadsheet version that the original requestor wanted. I use this spreadsheet version only if my process is complex and horrible, with multiple interactions with my customer. Let’s face it: if you are interacting frequently with your customer or supplier, something is probably wrong. Use this VSM to “see,” as Rother and Shook say, the waste and the high potential for disturbing your customer. Use the icon method if at all possible. We use only nine icons, and I have never needed more for non-manufacturing VSMs.
August 23, 2007 at 1:48 am #59143
Take your biggest issue, apply Six Sigma, and solve it. One cannot lose by applying Six Sigma to a supplier that is important but among your worst. Six Sigma’s fact-based, data-driven methodology is a win-win for suppliers and purchasing if used well.
Your message suggests to me that you desire to demonstrate the value of Six Sigma to others outside of corporate. And, secondly, you want to train associates in purchasing and wonder who might be a good source. Again, I assume you mean if you should train as Green Belts or Black Belts or some other.
Depending on your sponsorship and experience in your Black Belts or MBB’s, one might start out with one full time “Best of the Best” future leader Black Belt and one big issue. Get a win and then develop more Belts, Green or Black, but no more than you have mentorship and projects.
If I did not interpret your questions well, just write back and I’ll try it again.
June 28, 2007 at 4:00 pm #158025
Thanks for the prompt response. We currently do not have any Six Sigma training or even a formal program, yet. The way things typically work around here is that we develop programs at the grass roots level (lean, six sigma, KPI metrics, etc.). Once we demonstrate the value of these programs, then other groups jump on board. I know, it’s upside-down, but it is the reality here.
Given that this is the case, the Green Belts will be part-time. Leading projects? Not sure. But, it is probably a possibility for those individuals with the right aptitude.
I have the ASQ BoK, but was looking for suggestions and insights.
Thanks again.
May 29, 2007 at 2:42 pm #156703
From the customer’s standpoint you have exactly one opportunity per unit for a defect to occur, as the customer looks at the complete product.
How many opportunities you have in your process is irrelevant.
So an AQL of 2.5 is really about a 2 sigma level.
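That conversion can be checked directly: an AQL of 2.5% defective corresponds to the z value with 2.5% in the upper tail, about 1.96, i.e. roughly 2 sigma short-term. With the common 1.5-sigma shift convention the same rate would be reported as about 3.46 sigma:

```python
from statistics import NormalDist

def sigma_level(defect_rate, shift=0.0):
    """z such that the upper-tail area beyond z equals the defect rate,
    plus an optional long-term shift convention."""
    return NormalDist().inv_cdf(1 - defect_rate) + shift

print(round(sigma_level(0.025), 2))        # 2.5% defective -> ~1.96 sigma (short term)
print(round(sigma_level(0.025, 1.5), 2))   # ~3.46 with the 1.5-sigma shift convention
```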
May 27, 2007 at 6:22 pm #156638
If you can understand, then don’t respond.
March 30, 2007 at 9:09 pm #154231
Thank you that sounds right!!!
Rick
March 30, 2007 at 3:11 pm #154209
Yes, I have. But now I want to prove statistically that the difference is not significant. Any ideas?
Rick
March 27, 2007 at 3:03 pm #154044
What I meant to type was “RPN = Severity*Occurrence”, not “RPN = Severity*Detection”. Thank you for pointing out the error.
February 9, 2007 at 2:44 pm #151802
Attaching FMEA results and exceptions to the SOP... I find that quite innovative... has it helped?
February 8, 2007 at 12:51 pm #151743
Thanks. Which application is this, Justin?
October 5, 2006 at 12:18 pm #144299
[email protected] – Thanks0September 30, 2006 at 7:28 pm #144082
As an MBB who has worked with the very industrial companies mentioned earlier (GE and Motorola) for over twenty years, I leave you all with one idea: learn the truth and purpose of SPC, along with the root-cause methodology of Six Sigma and the methodology of Toyota’s Production System, and you will differentiate your quality and profitability from where you are today.
Don’t wait on your boss. If he/she doesn’t understand, competition will take care of that issue. Observe and share the success from your implementation and recognize the team members that helped. Reading Wheeler is a good idea, too, and get out and see real successes.
Tell us how it goes. Good luck and keep balancing asking with trying. Get started tomorrow.
September 29, 2006 at 1:11 pm #144012
Hello. I have a tendency to agree with your comment “I have spent time, money and efforts to perform such a unique PPT”. However, your comment “If you like a watch, a car, a ring, a book, a meal… you would be ready to pay for it” is not logical, as in this case there has not been any availability to preview, see, or hold the item, or any method given to gain an understanding of what is being purchased. This web site is not intended to hawk items but rather for a group of professionals to communicate and assist others in the advancement of our chosen profession.
Everything seems to be expensive, but the “Intellectual Property” seems to be free??? I wonder why, why and why? Wish to receive an answer.
September 28, 2006 at 2:21 pm #143943
Hello, is your change management PPT available for sharing?
August 7, 2006 at 1:47 pm #141486
Taiichi Ohno died in 1990. What would Ohno-san say? Continuous flow first, through JIT and Jidoka. Takt time does not have to be a constant. And remember, production smoothing comes before pull. Ohno said that variation greater than 10% will cause issues. The solution is to understand demand patterns (not what you want to ship, and not historically what you shipped = sales $) and create continuous flow as much as possible, mirroring that demand. If demand is still not smooth enough due to high seasonality, consider counter-seasonal products to smooth demand. If all else fails, smooth production using takt for the planning horizon you want to consider. Takt time can vary day-to-day, month-to-month, or over any period one chooses. The key is to pace your assets and flow to takt.
July 26, 2006 at 3:37 pm #58843
One can do four parallel paths, but it does get messy in the classic VSM style. If there are one or more paths that are relatively irrelevant to your improvement need, then it may be OK simply to omit them. If they are all important, then I offset them slightly to allow the timeline to show them. The critical path is the longest-time path and is a good place to focus.
Another way of showing parallel-path processes is to use a vertical VSM template with columns to show the value-added and non-value-added times by step. This method is useful when there are numerous interactions within a process flow, such as interacting with a customer and/or supplier to the process.
The key is to know what metric is important to your improvement goal and simplify your map to focus on the important contributing areas of your process. Keep it simple.
Hope this helps.0July 18, 2006 at 10:34 pm #140605
I am looking for a Quality Engineer with Shainin experience. It is a temporary assignment in Louisiana. Does this interest anyone?0July 5, 2006 at 4:46 am #139939
Common question, and don’t forget your Lean. Stockouts are primarily a function of mean demand, variation of demand and frequency of receipt. One will hear that the lead time is critical. Not really, unless the demand varies unpredictably. Note that I say unpredictably. Trends, seasonality and cyclicality are NOT unpredictable. Special events are often predictable, too, AND manageable. Consider plotting the actual demand of the finished goods (independent demand) on a statistical process control chart (ignore those who get hung up on normality at this stage). Then, extend to the demand of the components or raw material that you want to start improving (dependent demand). Oh, by the way, get ready to turn off MRP if anyone is using it to push raw inventory into your system, because we are setting you up for a much simpler and more effective pull system. Maximize the frequency of receipts immediately. Now, plot the demand in buckets of the replenishment lead time on a histogram. Send it to me and I will tell you exactly what your reorder point ought to be and the average inventory you should expect across ordering frequencies. Hide any identification of your product line, please. [email protected]. My name is Rick and I am glad to help.0July 5, 2006 at 4:34 am #139938
One might consider the need to recapture the voice of the customer when deciding whether to improve the existing process, undertake a more major recreation of the process, or build an entirely new process. For either of the latter two, DMADV is a good answer. So, what is the answer to “What is the difference?” The greater need for VOC suggests putting more effort into VOC, and considering Quality Function Deployment followed very soon by a Value Stream Map of the current state (or jump to a future state if no process exists). One can see I am suggesting a mix of classic DMAIC tools and Lean tools. That makes sense when you think about creating a new process. What does one want in a new process? A process that is high quality, benefiting from DMAIC, and a process that is very productive (Lean). Depending on the outputs desired, I would focus on Takt time, smoothing, continuous flow and standard work once the “right way” is discovered. Good luck.0June 26, 2006 at 7:35 am #139588
While I agree that Six Sigma and Lean are often confused with each other, for a given process, such as the one mentioned here, defects could be seen as the unnecessary steps in the process. For example:
SIX SIGMA: Out of a million parts made, how many of them were made without wasting time doing extra deburr?
Project: Ensure operators replicate the same process every time (eliminate variation in the process).
LEAN: In the making of parts, how much time is wasted doing extra deburr?
Project: Eliminate the time wasted doing extra deburr (eliminate the waste itself).0April 24, 2006 at 11:48 pm #136779
Interesting how the answers went from accounting for cash-flow impact to reinforcing continuous improvement. I will take all of the continuous improvement I can get, especially on the bottleneck (as one author points us to). We might be a bit naive to think we can actually measure true cash flow for many improvements, with so many other dynamics and variables hitting a business simultaneously. Enough improvements with good cost control will always win, even if the adjustments come in steps rather than at each improvement.
I am the MBB for Finance, in case you want to check any bias I might have.0April 6, 2006 at 12:58 am #58758
We would start with the customers to assess how we are doing.
Quite likely, there will be gaps in quality.
Some debate may occur if the gaps are primarily quality issues or time-based.
Simultaneously, read the criteria and starter kit for the Malcolm Baldrige Award (free on the web) and see the structure it provides.
Decide if you want to follow its structure.
Going back to the major issues with customers, choose Six Sigma if you want to focus on quality issues. Consider starting with Lean if focusing primarily on time-based issues.
Eventually, get experience in both methodologies and advance your Baldrige score. Don’t go for winning the award; just try to get above 400 as quickly as possible.
That is what I do.
0March 23, 2006 at 8:03 am #135402
As opposed to what the others have mentioned, I can see value in knowing the sigma level of an organization.
– what can be measured can be improved –
From that perspective, though, you should either focus on one specific defect that you suspect may have an issue, or take a broader viewpoint and consider each unit as an opportunity: if it is rejected/defective, count it as such no matter what type of defect it has. Hope that helps.0March 16, 2006 at 3:20 pm #135156
Good answer from Adam. We typically show a one-time balance sheet reduction of the value of the inventory (average current after the change minus baseline). Some companies will categorize this as “soft” savings. The hard savings is the annualized savings from the reduction of inventory carrying expense. Typically this is based on the interest rate on capital, which these days is usually from 4% up to over 12% depending on the firm’s situation. Finance is the place to get this validated. Some companies have so much inventory that they actually get savings from leasing out or selling the space freed up. This is hard savings. Some also report savings from less material handling (we actually eliminate stock rooms and a host of costs when we implement continuous flow and pull).
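A back-of-envelope sketch of the savings math described above; the inventory values and the 8% carrying rate are hypothetical, not from the post.

```python
# Hypothetical figures illustrating the one-time vs. annualized split
baseline_inventory = 500_000.0   # average inventory value before the change
current_inventory = 350_000.0    # average after implementing pull
carrying_rate = 0.08             # annual cost of capital (firm-specific, 4-12%)

one_time_balance_sheet_reduction = baseline_inventory - current_inventory
annual_hard_savings = one_time_balance_sheet_reduction * carrying_rate

print(one_time_balance_sheet_reduction)        # 150000.0
print(round(annual_hard_savings, 2))           # 12000.0
```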
Great to see the pull happening for you.
Rick0February 28, 2006 at 6:00 am #134376
Mr. Lam gives a good response. An MSA is always required if you consider that the purpose of an MSA is simply to ensure truth in measurements. An MSA may be as simple as a subjective validation that the measurements are believable, with an acceptable amount of risk of being wrong.
Gage R&R is the more comprehensive MSA, dependent on continuous data and very useful if you are concerned about variation from poor repeatability or reproducibility. Sometimes, two people clocking events is enough to judge truth. Think about the precision-to-tolerance ratio when making this judgment: is my potential for error much less than the tolerance I need to consider?
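A minimal sketch of that precision-to-tolerance check; the gage sigma, spec limits and the 6-sigma spread convention are assumptions for illustration.

```python
# Precision-to-tolerance: what fraction of the tolerance band the
# measurement system's spread consumes. Values below are hypothetical.
def precision_to_tolerance(sigma_gage, usl, lsl, k=6.0):
    # k = 6 covers +/- 3 sigma of gage error, a common convention
    return k * sigma_gage / (usl - lsl)

pt = precision_to_tolerance(sigma_gage=0.05, usl=10.5, lsl=9.5)
print(round(pt, 2))  # 0.3 -> the gage consumes ~30% of the tolerance
```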
Be practical. If in doubt, test it out.
0February 7, 2006 at 11:43 am #133498
I fully agree with Pepe. I come from the automotive industry and we are also using BSCs (balanced scorecards) to make comparisons between our different production plants all over the world. We even have conversion factors defined that enable us to score seat assembly plants, trim cover assembly plants, metal plants and battery plants against one another, although the processes and end products themselves are rather different. So I think this is the way to go. Define the KPIs that make up your final quality, and define how they should be scored so that everyone scores them the same way (otherwise your MSA would fail). Before you start using the scoring system, do a gage study to prove you can use the system. If everything is acceptable, fire away!
The conversion factors as we use them only came about two years after we started using the BSC, because that was the moment when there were enough numbers to start calculating these factors…
Regards,Rick.0January 16, 2006 at 7:38 pm #132476
Could you please send me the output of your gage study? I am using Minitab too and I can’t seem to find the column or heading you are talking about in your question.
It’s clear that the %Study Var (%SV) column should read less than 10% on the Total Gage R&R line.
Also look for the line ‘Number of Distinct Categories =’; it should be more than 6 too. Remember that your range chart per operator in the graphical output should be in control (you don’t want a data discrimination problem), and your Xbar chart by operator should be out of control (you want more than the normal measuring range covered). A straight horizontal line on the value-per-operator plot is also a good thing…
If one of those (except the last straight line) is not the case, then you can throw away your results because they aren’t worth a thing: the gage is unacceptable for the reasons I have summed up. Everything needs to be OK. Some people tend to find excuses in order to explain the results away, but what they are doing is fooling themselves.
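For reference, here is a small sketch of how %Study Var and the number of distinct categories fall out of the variance components a Gage R&R reports; the component values below are made up, not from any real study.

```python
# Hypothetical Gage R&R variance components
import math

var_repeatability = 0.04
var_reproducibility = 0.01
var_part = 1.20

var_gage = var_repeatability + var_reproducibility
var_total = var_gage + var_part

sd_gage, sd_part, sd_total = (math.sqrt(v) for v in (var_gage, var_part, var_total))

pct_study_var = 100 * sd_gage / sd_total   # Total Gage R&R %Study Var
ndc = int(1.41 * sd_part / sd_gage)        # Minitab truncates NDC to an integer

print(round(pct_study_var, 1))  # 20.0
print(ndc)                      # 6
```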
I work for two auto manufacturers in Europe (cannot disclose names), but they are both German brands. I can tell you that the first time I brought them a result that didn’t meet the set standards on all points, I would be butchered during my own presentation. Rules are meant to be followed, and if one of them isn’t OK, the total isn’t right.
Remember you are going to use this gage in order to do DOEs, for example. Ask yourself this question: suppose you have to make million-dollar decisions on the output of your DOE, knowing that the gage you were using isn’t really a good gage?
All I want to say is send me the output, so I can have a look. Preferably the graphical output too, which you cannot send me this way, but describe in words what the charts look like.
I’ll try to get you out of this with the knowledge I have. From what you have told me so far, your gage is unacceptable.
Regards,Rick.0January 16, 2006 at 7:25 pm #132475
Your improvement might very well work with static images, where the sample presented to the vision system is always positioned in the same way, and that is surely the type of system you are using or mentioning.
In my factory we make car seats, and these seats are never the same. We used to use a static system with reasonable results to check airbag labels on covers. Now those labels have changed into little ‘flags’ sticking out of the cover at whatever angle you can imagine.
We had to get rid of our old vision system and start with a dynamic one with learning capabilities. It is a Siemens-based system using software conceived and written at a US university.
This remark is only to mention that you should always look at the usage of the system before deciding which system to use. In my opinion, investment in a static system will not pay off anymore. Even in static environments the dynamic ‘learning’ systems score far better than the static ones. It’s really worth the investment. If we had known, at our plant we would have started with the dynamic one right away. It would have saved us several tens of thousands of dollars, money that is now thrown away on an unused static vision system…
Regards,Rik.0January 16, 2006 at 4:49 pm #132470
Very good question, but can you give some more information on how you did the actual Gage R&R?
The original %SV of 52% is way out of line, and I do not really understand why just adding USL and LSL specs would have changed your final results so drastically, hence my question about how you did the actual study.
If the major source of variation is your vision system, then it is indeed a wise idea to pinpoint that variation and make the improvements before you buy the tooling. It is a great experience gain for your supplier, because he will gain more knowledge about the machine and be able to build better products in the future. For you it is good because you don’t need to spend money on that issue later on.
Regards,Rick0January 16, 2006 at 4:41 pm #132467
Thanks for your remark.
As I stated, the R-squared value is an indication of how well your regression line fits the data points. Strictly speaking, you are right that adding variables can never decrease plain R-squared; it is the adjusted R-squared that drops when the added variables don’t really fit the regression.
So it depends on which value you watch and where you start: if you start with the full model and remove unimportant terms, adjusted R-squared can go up, while adding noise terms makes it go down.
Another point, already stated if I am not mistaken: who says the equation of the line going through your data points is a straight line? Perhaps non-linear equations fit the data points better.
Nevertheless, good remark. I will do some research and see if I can find something to show my point. I am not a native English writer, so sometimes I confuse terms when I read things in English. I am sorry if that is the case here.
Regards,Rick0January 16, 2006 at 8:30 am #132448
Good remarks. I just wanted to add that the R-squared and adjusted R-squared values are simply an indication of how well the regression equation coming out of the DOE points (in other words, your regression line) fits the data points. If you want to make predictions with this model, then your R-squared value should be as high as possible. If not, I personally don’t really look at this value.
A low adjusted value could perhaps show that you have too many variables in the equation. Try squeezing out some less important factors; you may see the adjusted R-squared value go up.
If taking out the less important factors doesn’t give you a higher value, then I agree with the remark that you should look for other significant factors. In other words, your pre-analysis phase for the DOE was not carried out correctly. In my opinion this mostly points to Ishikawa diagrams that weren’t taken deep enough (did you ask why, why, why enough?).
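The R-squared behavior being debated here can be checked numerically: plain R-squared cannot decrease when a factor is added, while adjusted R-squared penalizes factors that do not help. A sketch with synthetic data (numpy only; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
x1 = rng.normal(size=n)
noise_factor = rng.normal(size=n)              # has nothing to do with y
y = 3.0 * x1 + rng.normal(scale=0.5, size=n)   # y really depends only on x1

def fit_r_squared(predictors, y):
    # Ordinary least squares via lstsq; predictors is a list of 1-D arrays
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    p = X.shape[1] - 1                         # number of predictors
    adj = 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)
    return r2, adj

r2_one, adj_one = fit_r_squared([x1], y)
r2_two, adj_two = fit_r_squared([x1, noise_factor], y)
print(round(r2_one, 3), round(r2_two, 3))  # r2_two is never smaller
```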
Always open for remarks on this one …
Regards,Rik.0January 13, 2006 at 5:33 am #58697
When we have processes with interactions between customers or suppliers and us, and we want to effect improvements in the supply chain, we use different maps. These VSMs differ from Learning to See in several areas, and that is why they work better for some transactional projects. Reasons: the Learning to See examples are effective for one process with little interaction between customer and supplier. The beauty of these transactional VSMs is that we quickly learn to see the process value and non-value streams. Similar maps can be found in Lean Solutions by James Womack. The VSMs depicted in Lean Solutions show the interaction of “consumer” and “provider” so common in Finance, Sales, HR, Legal, Marketing, Counselor, etc. transactions. Here is how we do VSM.
Simply walk the process and record the “touch time” of everyone in the transaction. Have a column for the consumer’s tasks, a column for the provider’s tasks, columns for the times of each task, and we add a column for cycle time to record the wait, queue and setup time, if any. This gives the total cycle time and the touch time. Two other columns show which tasks add value to the customer, and a column of arrows gives the benefit of a “spaghetti” diagram. The spaghetti diagram is a visual aid and helps us see the waste of motion back and forth in a transaction.
Hope this explains it well. I can send VSM examples if you write to me at [email protected]0January 12, 2006 at 8:24 am #132274
I think there is a lot of truth in everything being said here, but keep in mind that as a Green Belt you are not expected to teach the DMAIC methodology to your team members. You have followed, or will follow, a three- or four-week course. I don’t think the purpose of the course is to teach you to teach the methodology to others; it teaches you to use the DMAIC storyboard and the statistical tools that go along with the methodology.
Sometimes it is a good idea to clarify some things for your team members, although in my humble opinion you should leave that to Black Belts and Master Black Belts. Remember the Belts are there to ensure that the DMAIC process is followed. They guide the team, because the expertise should sit in the team, not with the Belts. It’s good that you understand what they are talking about, but the fine points are known by your team members. I have done several projects as a Black Belt where I didn’t have a clue what my team members were talking about, but we succeeded in doing great things because the process owner and I had chosen THE experts for the team. I listened to what they had to say and tried to fit it into the storyboard. I haven’t yet come across an example where that didn’t work!
The idea of using real day-to-day examples is also good to show the general usage of DMAIC. That should give your team members confidence that it’ll work for them too.
Last remark: don’t pull yourself down. Don’t think you are a bad teacher. It’s not because you think you are not a good teacher that others share that opinion, and even if they do, try to get some input from them in order to learn how to teach better. In the end you will be able to teach better…0January 12, 2006 at 5:21 am #132267
Someone is really upset about India, or probably Tamilnadu…?
Does anyone know?0January 11, 2006 at 4:41 pm #58692
Very good question and a popular one, too. We have added a dimension to VSM that may help you move from manufacturing to transactional. We have found that mapping the customer and supplier value streams together with your process is useful when there is significant interaction WITHIN your process and/or your scope includes reducing the waste your customers and/or suppliers are experiencing. The VSMs often depicted in manufacturing may not emphasize the significant wastes customers and suppliers also experience.0January 11, 2006 at 10:56 am #132186
0January 11, 2006 at 10:35 am #132183
Well, that’s good to know.0January 11, 2006 at 9:48 am #132180
Where is Tamilnadu? In Russia?0January 11, 2006 at 9:25 am #132178
Thanks a lot. That would help.0December 20, 2005 at 4:37 pm #131403
Totally agree with the previous posts, guys. Take it from someone who was there (the consulting business), not in the Six Sigma arena but the IT arena. It’s very hard, and I am afraid you need to expand quickly in order to be able to take some hits if they come. If you don’t, you need a lot of cash reserve to be able to absorb those hits (as said).
On the other hand, I do agree with the idea that transactional work is going to be more and more important in the future, but then again you will be dealing with computer systems or things that are mostly related to computer programs. And who do you need for that? Right, IT people. That is what has driven me from the IT world to the Six Sigma world. I had a 15-year career in the IT business and am now starting to understand the whole Six Sigma game. Another two years of experience and I think I will hit the consultancy world again, but then with good knowledge of both the IT and Six Sigma worlds. That should give me a distinct advantage over the competition, don’t you think? If you’re interested in talking to me, please leave your e-mail address in a reply and I will get back to you. If you’re looking for someone with IT-related business knowledge, then we certainly need to talk…0December 20, 2005 at 4:21 pm #131401
We use sigma calculators that can calculate overall sigma levels across different processes, but I am afraid I cannot give you these because they are copyrighted and confidential materials. Nevertheless, I will try to work out what’s behind the calculations and update you if I find out.0December 20, 2005 at 4:16 pm #131400
The problem was not that operators measured the samples with too much variance; the variance was too small. The same sample was consistently measured the same, which brought my R-bar value down drastically, and with it the UCL, because the UCL is just R-bar multiplied by a constant dictated by the subgroup size. Apparently this is called a data discrimination problem and can only be solved by using a more accurate gage, which we have done, and guess what: our problem is solved and the %Study Variation is even better than the one we had with the first gage… But thanks for your advice anyway.0December 20, 2005 at 9:35 am #131389
I think I have the same problem in a Gage R&R study. Everything is OK (number of distinct categories, the whole range is covered, so the Xbar chart per operator is ‘not in control’, …). The only problem, according to my MBBs, is that the R chart per operator is out of control. The reason the R chart is out of control is that the operators had almost no difference measuring the same parts and didn’t make many mistakes, meaning there wasn’t much variation in the measurements between them. Should we therefore not accept the MSA? I thought MSA was all about checking whether a certain gauge would produce the same result whichever operator did the measurements. If there isn’t much variation between the operators and they all measure the same part the same way, isn’t that a clue that your gage should be OK?0September 15, 2005 at 4:43 pm #126917
Senthil mentions a famous company on the subject. You may find them mentioned many times in Womack’s new book, Lean Solutions.
The “Newsboy Model” of inventory management describes your situation somewhat and offers an algorithm to consider. Of course, whether a sale is really lost is a major assumption, and a major issue with any inventory management problem.
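A minimal sketch of one such algorithm, the statistical reorder point (cycle demand plus Z times the demand variation over the replenishment interval); the demand numbers are hypothetical, and scaling the daily standard deviation by the square root of the interval assumes independent daily demand.

```python
# Hypothetical daily demand history for one SKU
from statistics import mean, stdev
import math

daily_demand = [48, 55, 40, 62, 51, 47, 58, 44, 53, 49]
days_between_replenishments = 7
z = 3  # ~99%+ service level

cycle_demand = mean(daily_demand) * days_between_replenishments

# Std. dev. of demand over the replenishment interval, assuming
# independent daily demand (an assumption, not from the thread)
safety_stock = z * stdev(daily_demand) * math.sqrt(days_between_replenishments)

rop = cycle_demand + safety_stock
print(round(rop))  # reorder when on-hand + on-order drops to this level
```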
Lean consumption and Lean provision are the ultimate answer. To get there, don’t forget the statistical reorder point formula: ROP = (mean daily demand × days between replenishments) + Z × (std. dev. of demand between replenishments). I use Z = 3 for a 99%+ service level.0June 24, 2005 at 6:34 pm #122153
I use the JMP software and in their help menu they give the following values. Hope this gives you another perspective.
< 10% excellent
11% to 20% adequate
21% to 30% marginally acceptable
> 30% unacceptable.0June 21, 2005 at 6:17 am #121850
Hi, if you have received the info, I would appreciate if you could share it with me.
Please mail it to [email protected]
0June 2, 2005 at 1:53 am #56678
Bob J’s advice is good. We use P charts when we only have attribute data on the output. Focus on first-time-through yield, especially if no calibration is done (adjusting the output due to uncontrollable input variation). Use variables-type data whenever available.
We typically use P charts on the output first-time-through yield and quickly try to find the input(s) most affecting the failures. The output SPC chart should be posted in real time, in the process area. Get the process operators to react to the P chart and suggest the inputs that are causing special-cause variation.
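A minimal sketch of the p-chart limits behind such a first-time-through yield chart; the defect counts and subgroup size below are made up.

```python
# Hypothetical daily subgroups: failed units out of n inspected per day
import math

defectives = [4, 6, 3, 5, 7, 4, 5, 6]
n = 100  # units inspected per subgroup

p_bar = sum(defectives) / (len(defectives) * n)
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma_p
lcl = max(0.0, p_bar - 3 * sigma_p)   # p cannot go below zero

print(round(p_bar, 3), round(ucl, 3), round(lcl, 3))
```

A daily fraction defective above the UCL (or a sustained trend) signals special-cause variation worth chasing back to an input.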
Get SPC on the inputs quickly and control them and then consider eliminating SPC on the output and input if now fail-safed.
Trigger yield limits, I assume, are early warnings before customer requirements are at risk. The control charts should be sensitive to trends as well as any point outside the control limits. The control limits have nothing directly to do with the customer specs, but they warn when a change in the process may be occurring. Center your process, reduce variation, achieve capability and then use the SPC out-of-control events as the trigger.
Let me know if you need more.
Rick0April 28, 2005 at 3:12 pm #118593
Z’s response is correct. It is a cop-out for leadership not to provide operational definitions of quality and expectations of everyone in achieving quality, along with the resources for them to do their job.
In addition, quality should be assessed by the persons in the process. They should have the training and resources to assess their own quality.
If this is not convincing enough, collect some data.
The data to collect is the ability of the “Experts” to repeat and reproduce their judgments of quality. Perform an attribute agreement analysis (Minitab has a really easy process) with these experts. Use samples that represent the difficulty of finding multiple defects per service/unit. Share the results. If the experts fail, lesson learned. If they pass, then use AAA to certify the people in the process are capable.
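For a quick check outside Minitab, percent agreement and Cohen's kappa between two appraisers can be computed directly; the pass/fail calls below are invented for illustration.

```python
# Two appraisers judging the same ten samples (hypothetical calls)
rater_a = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "fail", "pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement from each rater's marginal pass/fail rates
labels = set(rater_a) | set(rater_b)
expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)

kappa = (observed - expected) / (1 - expected)
print(round(observed, 2), round(kappa, 2))
```

Low kappa between "experts" makes the point in the post: the judgments are not reproducible enough to serve as the quality gate.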
0April 15, 2005 at 1:26 am #117794
We try to create a DMADV exercise relative to the business. For instance, we created a training exercise around designing and racing pinewood derby cars for an automotive supply firm. The car kits are readily available from the Scouts and hobby shops and are cheap. One can include many DMADV tools, including QFD with the scout as customer and the team as car developer, mapping, C&E, FMEA, hypothesis testing, DOE and validation by racing.0April 12, 2005 at 4:36 pm #117627
I hope the ideas helped. Vinny, I am not sure I understand your question. You may contact me at [email protected]0April 6, 2005 at 3:56 pm #56620
Good and common question. Idea generation has been a common leading indicator of continuous improvement driving toward operational excellence. Of course, idea implementation is closer to “the end in mind” of favorable improvement, and this metric is also used. One of the most popular methods of measuring idea generation using Lean and Six Sigma is to post the map of the process’ value stream in the process area and provide plenty of sticky notes and pencils for people to post ideas and improvements. A couple of major auto companies do this well, as does my company.
Another metric to consider is always customer satisfaction improvement; hard savings are commonly difficult to identify here, but it is very important, we would all agree. A well-designed survey and good statistically based sampling can make this metric work for showing excellence in the entire operation, from sourcing through reverse logistics.
Lastly, maybe help us with your definition of operational excellence and you may get more fine-tuned tips.0April 2, 2005 at 4:06 am #117147
Before you get too far, I would ask why we want turnover reduced. So, speculating a bit…
1) Build a CTQ flowdown to look at what drives turnover. For instance, it’s often a function of things like associate satisfaction, lack of promotion, salary competitiveness, etc.
2) You can then develop a transfer function Y = f(x) that should help you get at COPQ. Without speculating too much, the COPQ would equate to the financial benefit (what I often refer to as the Yf) and the drivers of cost or lost profit. In this case, ask yourself the hypothetical question “What if I reduced turnover by 50%?” Annual training costs would be reduced by 50%, one-time startup costs (office setup/admin fees) would be reduced by 50%, and so would lost productivity. This last one is a little trickier, but could be calculated as the difference in productivity between a longer-term employee and a new employee.
As an example, in sales process engineering we often find that it takes 12-18 months to “ramp up” a new employee to the average rate of revenue generation of longer-term employees. Therefore, the COPQ of a lost employee also includes the reduced revenue during the 12-18 months of retraining. Often in sales we also see customer loss associated with turnover. There should be enough data to show the “rate to average productivity” of new employees, which would then help with the business case for reduced turnover.
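Carrying the transfer-function idea through with hypothetical numbers (every figure below is invented, not from the post):

```python
# Cost of losing one employee (all values hypothetical)
training_cost = 8_000.0              # per new hire
startup_cost = 2_500.0               # office setup / admin fees
monthly_productivity_gap = 4_000.0   # shortfall vs. a fully ramped employee
ramp_months = 12                     # within the 12-18 month range above

copq_per_departure = (training_cost + startup_cost
                      + monthly_productivity_gap * ramp_months)

# Annual COPQ scaled by headcount and turnover rate
headcount = 200
turnover_rate = 0.15
annual_copq = copq_per_departure * headcount * turnover_rate

print(copq_per_departure, round(annual_copq))
```

Halving the turnover rate then halves `annual_copq`, which is the financial benefit of the hypothetical 50% reduction.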
Frankly, your bigger challenge is to figure out what is causing the turnover.
Food for thought.
Bank of America0March 28, 2005 at 2:04 pm #116914
The truth is unless you have a valid customer specification, you really don’t know. First, it sounds like you have an existing process for data capture and a lot of “improvement wishes”. This sounds like two projects I had when I first joined banking to improve CRM data quality and later when I led customer delight improvement for the Fleet merger (lots of client record clean-up!).
In both cases I had to step away from “data quality” and ask how the data was used in the context of a larger process (map the overall process first and figure out what the data was used for) and basically apply MSA (measurements systems analysis) to the work.
The most difficult step was determining the attributes of the data record. In service industries, like banking, it’s recorded information such as name, address, etc. In product/process industries, harkening back to my old days with DuPont, it’s things like polymer viscosity. The nuance is that in service industries the data is almost always just the data (a name and how it’s spelled), and operator entry error is most often the issue (fat-finger keying, omission, changed dates on birthdays). In product/process you might get into tolerance analysis or propagation-of-error analysis to look at the uncertainty in a reported number. Whew!
The next step was sorting through levels of data accuracy. Is it complete (pass/fail: is the data there)? Is it valid (formatted correctly)? Is it “accurate”? This last element often requires some level of auditing of the data. For example, in a CRM system the only way I might know I have Jane Doe’s address right is to ask her. In my days at DuPont, when I often had 6 major global customers, that was easy. In banking, where we have something approaching 100 million customers with 3-4 accounts each, we resort to statistical sampling for accuracy checks.
The next step was translating how many “defects” in the data set constituted a “defective record,” as some data was “must have” and some was “wish to have,” particularly in the context of Patriot Act regulations in banking. Finally, we translated the information into a simple yield calculation (i.e., 91% of records were not defective) versus the customer’s end-use process specification. This is where you can begin to assess the capability of your data to meet the needs of the process it serves.
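A toy sketch of that complete/valid/accurate layering applied to a few invented CRM-style records; the field names, validation rules and "verified" flag are assumptions for illustration, with the audit step reduced to a stored boolean.

```python
# Hypothetical records; "verified" stands in for a sampled customer audit
import re

records = [
    {"name": "Jane Doe", "zip": "28202", "verified": True},
    {"name": "",         "zip": "28202", "verified": True},   # incomplete
    {"name": "John Roe", "zip": "282",   "verified": True},   # invalid format
    {"name": "Ann Poe",  "zip": "10017", "verified": False},  # failed audit
]

def is_complete(r):
    return all(str(v).strip() for v in (r["name"], r["zip"]))

def is_valid(r):
    return re.fullmatch(r"\d{5}", r["zip"]) is not None

def is_accurate(r):
    return r["verified"]

good = [r for r in records if is_complete(r) and is_valid(r) and is_accurate(r)]
first_pass_yield = len(good) / len(records)
print(first_pass_yield)  # 0.25 here: only one record passes all three gates
```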
Let’s say for discussion purposes that my “first-pass data yield” was 50% (complete, valid and accurate). Clearly this would seem a candidate for DMADV if the customer specification is 90%. However, my coaching is to do a blitz to determine if there are quick improvements first and then fall back to the DMADV approach. Often you will find one step in the process that is generating most of the defects. Mistake proofing often works wonders.
Here are some great articles on data quality improvement you might find of use.
Cheers and good luck data mending!
Business Banking Q&P Executive
Bank of America0March 15, 2005 at 10:48 am #116387
As I was reading Implementing World Class Manufacturing, I noticed Six Sigma sounded very familiar. So I looked up W. Edwards Deming in an old college textbook and found his theory is just about the same. Isn’t Six Sigma just a modern-day version of the same?0March 14, 2005 at 1:53 pm #116341
Can employees doing the work in the shop be Green Belts? I’m talking about intelligent employees who have been doing the work process. I’m always under the impression that people who do the work can help your company save big money. They are more familiar with the process and know faster and better ways of doing it.0January 17, 2005 at 3:40 pm #113594
Good answer above. Plus, Minitab has a very thorough description of distinct categories, with the math behind the overlapping-confidence-interval analogy. You might also verify there is enough discrimination in your measurement system by reviewing a control chart of the range of variation within each person’s measurements. Look for 4 or more different values on the Y scale of the range chart, in addition to using distinct categories.0December 21, 2004 at 1:57 pm #60413
Our health system is a venture of two previously competing hospitals. Both hospitals had quality methods at different levels of success. It was only when we adopted the Six Sigma methodology (3 years ago) as a system methodology that we were able to see steady improvement with common objectives and project teams. Integration of Six Sigma or any improvement method, from what I have witnessed at 3 companies, always must journey through the forming, storming, norming and performing phases (with emphasis on the storming phase). Each organization’s travel time through these phases occurs at a different speed and depends on the support of leadership. If your organization can build on some early, quick wins from Six Sigma efforts, once you get three years out you will see the change in the culture and acceptance of the method. Feel free to contact me to discuss further. 724-773-2017.0December 21, 2004 at 1:45 pm #60412
Feel free to contact me regarding your question on Six Sigma implementation and non-clinical/financial projects. We have had significant success in these areas, from proper patient admission status to food service optimization, lab billing, reduction of non-billable patient stays, staff overtime reduction, and the like. I look forward to talking to you. 724-773-2017.0December 15, 2004 at 11:44 pm #112476
Feel free to contact me regarding training resources in this area. 412-749-7009 Rick0December 14, 2004 at 3:19 am #58282
There are several methods you might consider depending on your type of business.
Many companies build a matrix listing 5 to 10 criteria they believe capture value. Many models focus solely on ROI or NPV calculations; that is typically not the best single measure.
Most, if not all, projects should be tied to the long- and short-term strategic goals of the company.
You might consider creating a list of value criteria, scoring each one, applying a percentage weight, and summing the weighted scores.
Give your scoring a scale: 1 to 3 little value, 4 to 7 some value, 8 to 10 large value.
1. Supports customer need – 6 – Wgt = 25%
2. Aligns with long-term goals – 4 – Wgt = 20%
3. Economic viability – 7 – Wgt = 15%
4. Supports regulatory need – 3 – Wgt = 10%
5. Duration of project – 8 – Wgt = 10%
6. New innovative product – 2 – Wgt = 10%
7. Supports core competencies – 8 – Wgt = 10%
This is only an example, but by creating a score you can quickly assess the value of each project.
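As a quick sketch of how the scoring works, the weighted sum can be computed like this (the criteria names, scores, and weights are just the example values from this post):

```python
# A minimal sketch of the weighted-scoring matrix described above.
criteria = [
    ("Supports customer need",      6, 0.25),
    ("Aligns with long-term goals", 4, 0.20),
    ("Economic viability",          7, 0.15),
    ("Supports regulatory need",    3, 0.10),
    ("Duration of project",         8, 0.10),
    ("New innovative product",      2, 0.10),
    ("Supports core competencies",  8, 0.10),
]

# Weights sum to 1.0, so the result stays on the 1-10 scoring scale.
weighted_score = sum(score * weight for _, score, weight in criteria)
print(round(weighted_score, 2))  # 5.45
```

Scoring every candidate project the same way gives you a single comparable number for ranking the portfolio.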
Remember Little's Law regarding work in process (projects in process) and limited resources:
Lead Time = Projects in Process / Average Completion Rate.
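A minimal illustration of Little's Law with hypothetical numbers:

```python
# Little's Law: average lead time = items in process / average completion rate.
projects_in_process = 12
completions_per_month = 2.0  # average throughput of finished projects

lead_time_months = projects_in_process / completions_per_month
print(lead_time_months)  # 6.0 -- months before a newly started project finishes
```

The arithmetic makes the point: cutting the active project list in half cuts the average lead time in half, with no one working any faster.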
Most organizations underestimate the time and the number of resources required to implement a project. Without a sound means to prioritize projects, you are never certain that the projects you are working on are the ones that will sustain the long-term viability of your company.
In the end, most companies should find it hard, if not impossible, to complete three to five large projects a year, depending on available resources. Good managers make tough decisions about where effort will be spent. If you are having problems with runaway project lists, chances are your upper management is either a distant group of spectators or woefully inept at focusing and managing resources, or perhaps both. If so, it would do them good to attend one or more TQM classes or seminars.
I hope this helps.0August 18, 2004 at 1:24 am #105895
Thanks for your reply. I partly agree with you. In our company, the printing operators have more than ten years' experience in the printing area. Would you like to give me a detailed case study? You can email me at [email protected]
Thanks0August 17, 2004 at 8:05 am #105829
For printing, it is really difficult to use SPC in process; there are too many factors that affect product quality, such as the operators' skills, the difficulty of the parts, the condition of your facility (cleanliness), the capability of your equipment, and the performance of your printed material.
I once worked for a printing company and tried to use SPC in process, but failed. If you have had more success, please share it with me.0May 14, 2004 at 11:38 am #100263
Is the power of statistics in solving problems being put to poor use here?
MOST IMPORTANT: The job of determining which cavity a part came from should have already been done. There should be a method (a pattern of dots, a number from 1 to 12, etc.) that identifies which cavity each part came from. Now the problem is less a statistics problem than what I believe has been addressed in the responses: after separating the parts by cavity, how many parts do you have to test to reach a given level of confidence that there are no issues with the parts?
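One common way to answer the "how many parts per cavity" question is the zero-failure (success-run) sample size. This is a sketch with hypothetical reliability and confidence targets, not a recommendation for any specific part:

```python
import math

def success_run_n(confidence, reliability):
    # Success-run theorem: n = ln(1 - C) / ln(R), rounded up.
    # Testing n parts with zero failures demonstrates reliability R
    # at confidence level C.
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Hypothetical targets: 99% reliability at 95% confidence
print(success_run_n(0.95, 0.99))  # 299 parts per cavity, all passing
```

If any part fails, the plan no longer applies and a larger attribute sampling plan (or a variables-based plan) would be needed.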
SECOND MOST IMPORTANT: The molder, in general, should be able to help solve this problem. Rather than mixing the output from the different cavities, they should place the output from each cavity into a separate container. Then, if you find an issue with the way the parts fit or function, you know which cavity the part came from.
If the original experiment fails to match up the different parts, you may not discover the issue until it is too late. Even if the original experiment finds a problem, you still have to mold parts and have them separated by cavity at the time of molding.
0May 13, 2004 at 4:25 pm #100187
You have been working on an interesting problem. I am surprised you have not received a greater response.
Here are three comments that might stimulate additional thoughts.
1. It appears the sample size is relatively small to draw strong conclusions. For example, for split tyre, chi-square = 4. With df = 4 and a chi-square critical value of 9.488 at the 5% significance level, there is no statistical difference between vehicle types. In other words, you cannot reject the null hypothesis (H0: cause A = B = C = D = E).
2. There is a real danger when statistics are applied to this problem. A simple statement to management that there is no statistical difference between the vehicle types for split tyre could lead to a belief that there is no need to revisit this question with a larger sample size.
3. Chi-square = 47.76 for the totals, which is statistically significant. The data clearly show that some vehicle types are worse than others. In the case of vehicle types B and E, what did you do to determine whether there is a statistical difference between those two?
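As a sketch of the comparison in point 1, here is the chi-square goodness-of-fit calculation against equal expected counts. The counts are hypothetical, since the original data are not in this post; only the df = 4 critical value of 9.488 comes from the discussion above:

```python
# H0: failure counts are equal across the five vehicle types A-E.
observed = [8, 4, 6, 5, 7]                # hypothetical split-tyre counts
expected = sum(observed) / len(observed)  # equal counts expected under H0

chi_square = sum((o - expected) ** 2 / expected for o in observed)
df = len(observed) - 1                    # df = 4
critical_5pct = 9.488                     # chi-square critical value, df = 4, alpha = 0.05

print(round(chi_square, 2), chi_square > critical_5pct)  # 1.67 False -- cannot reject H0
```

With small counts like these, the statistic sits far below the critical value even when the raw numbers look quite different, which is exactly the sample-size danger raised in point 2.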
I would be interested in finding out more about your final analysis.0May 13, 2004 at 11:06 am #100147
Stan is absolutely correct. The whole idea of a control chart is to detect when a stable process goes out of control. By recalculating the control limits for each new set of data, you clearly defeat the purpose of control charts.0
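A minimal sketch of the point, with hypothetical data: compute the limits once from a baseline period, freeze them, and judge new data against those fixed limits rather than recalculating:

```python
# Individuals (I) chart: limits are frozen from a stable baseline period.
baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7]
mean = sum(baseline) / len(baseline)

# Average moving range between consecutive baseline points
mrs = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
mr_bar = sum(mrs) / len(mrs)

ucl = mean + 2.66 * mr_bar  # 2.66 = 3 / d2 for a moving range of 2
lcl = mean - 2.66 * mr_bar

# New data is compared against the FROZEN limits, never recomputed limits.
new_points = [10.1, 11.5, 9.9]
out_of_control = [x for x in new_points if x < lcl or x > ucl]
print(out_of_control)  # [11.5]
```

If the limits were recomputed from each new batch, the 11.5 reading would simply inflate the new limits and the signal would be lost.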