Forum Replies Created
April 10, 2009 at 6:24 am #65271
Thanks a lot Don,
that was really helpful.
April 8, 2009 at 5:03 am #65265
I have baseline data reflecting a 23% difference between pre- and post-review defects.
Also, if you can offer guidance on the typical steps to follow for a Six Sigma project on this, it would be a great help.
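For context, here is a quick sketch of how one might check whether such a difference is statistically significant. The defect counts below are hypothetical placeholders, not my actual baseline data:

```python
import math

# Hypothetical counts -- not the actual baseline data.
pre_defects, pre_reviews = 100, 1000    # 10.0% defect rate before
post_defects, post_reviews = 77, 1000   # 7.7% after (a 23% relative drop)

p1 = pre_defects / pre_reviews
p2 = post_defects / post_reviews
pooled = (pre_defects + post_defects) / (pre_reviews + post_reviews)

# Two-proportion z-test for a difference in defect rates
se = math.sqrt(pooled * (1 - pooled) * (1 / pre_reviews + 1 / post_reviews))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")
```

With these made-up counts the 23% relative drop is only borderline significant, which is why establishing the sample sizes behind the baseline is one of the first steps in the Measure phase.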
March 16, 2009 at 4:30 am #182397
Thanks, guys, for your replies.
I know you all emphasised the need for knowledge of Crystal Ball. What resources can I look to for this?
Also, I would like a few examples of situations where Crystal Ball can be aptly used.
Thanks in advance.
December 15, 2008 at 11:00 am #178721
Neither. There is never a need to transform and outliers should never be removed.
Read Wheeler; Shewhart.
December 5, 2008 at 10:56 am #178352
Mario/Gary,
Thanks. You've put my mind at rest.
Tim
December 4, 2008 at 12:26 pm #178314
The reason there's so much store set by the numbers is that they give managers something to use to run the business that removes any interpretation. For a large business this is crucial – the org is managed by internal team benchmarking. If you can get the right metrics in place this works very well indeed. The challenge is to continuously evaluate the metrics and keep them 'clean'. In the worst cases, as you say, they can drive bad behaviour. If the culture's not right (including rigour about the numbers), there's also a tendency for middle management to become a process of 'fixing' the data, which not only drives the wrong business decisions but is also a hugely expensive process in itself.
December 4, 2008 at 10:07 am #178306
Thanks, Andy. I now feel like I'm not necessarily barking up the wrong tree. What's the answer to your questions? I'm more interested in the approach to:
- working out a relevant fudge factor for a particular process domain (I doubt that's corporate-specific; more likely it varies by business process type).
- understanding the benefit/cost of continuous measurement so that there's no need for a fudge factor.
My main problem domain of interest is the processes for managing IT in large enterprises (mostly after the code is built). The common problem I see is that the numbers used to run IT are very precise… and fabricated – so that they meet the goals rather than providing a management tool. Few people measure error rates, but I'm seeing 800k DPMO for many IT management processes, with correspondingly large increases in capital and operational costs. So I'm interested in the value of data accuracy – hence my concern over such large fudge factors. I need to be able to justify these from a theoretical and practical perspective. I guess it's common knowledge in the Six Sigma community that the DPMO for 6 standard deviations is 0.00099, rather than 3.4. That's quite a difference!
December 3, 2008 at 9:53 pm #178298
Of course they'll move (usually becoming worse, as they aren't being watched). That's the nature of any time-series data. But, as you say, the shift varies from case to case. So why hard-code 1.5 sd into the conversion tables and obfuscate the situation? Surely this just makes it harder to understand how the tools and techniques work, as there's a bit of 'magic' in there? Whatever. It's the way it is.
December 3, 2008 at 9:27 pm #178292
Thanks for the responses. I know what the data mean. I'm not familiar with Six Sigma, but I used to build mathematical and statistical models of various types of processes (including chemical reactions, physical systems, and manufacturing processes). I found it very odd that a discipline focused on understanding the quality of the data it uses, and which rightfully emphasises the usual statistician's passion for accurate and consistent definition of the meaning of the data, then redefined the usual abbreviation for standard deviation (sigma). Surely it should be 4.5 sigma :-) If the term were used in its normal sense, there wouldn't be any need for the conversion tables. I don't have much problem with the meanings of the terms; I'm just trying to assess the commonality of understanding among the Six Sigma community. I'm also more than a bit worried about the size of the fudge factor (1.5 sd) – it sounds like an artifact of a limited sampling approach. Surely, if I can get some decent automatic measurement in place, I can eliminate this: I can pick out the other factors and do some sensible time-series analysis to spot and eliminate any 'shifting and drifting', and remove such sources of variation, as I aim to do with the rest of the Six Sigma toolkit?
December 3, 2008 at 7:20 pm #178286
PPM vs. DPMO is a smokescreen. The point is that you cannot simply roll up the metrics to get a sensible number, as you have no view of the significance of each process to the overall defect rate for the end customer. So the suggestion is that you measure that defect rate directly.
November 28, 2008 at 3:31 pm #178153
Excellent suggestions, Sue. These are helpful for anyone looking at on-line training. Sometimes it’s difficult to be a shopper for eLearning because it’s not an area we have experience with. These are questions every buyer should ask of the provider.
Dave – here's a link to a site run by the US DoD's Advanced Distributed Learning Initiative: http://adlcommunity.net/. You originally had asked for research, reports, etc. The ADL doesn't study Lean Six Sigma training, per se, but is sponsored by the US Department of Defense and so is fairly unbiased in its reports/analysis. There's a ton of good research there, and it's actually based on data (imagine that!). You may find Dr. Traci Sitzman's work particularly interesting.
November 26, 2008 at 7:16 pm #178121
David – search for "Blended" and similar training terms in the forum. Here's a link to one of the discussions: https://www.isixsigma.com/forum/showmessage.asp?messageID=87316. I know Doug personally, and I recommend you contact him directly. He has a great deal of experience and research (both his own and others') at his disposal. Although travel/lodging expenses are a driver of the choice you are facing, the data show that your current approach is likely more effective than back-sliding to a pure ILT model. I suspect, however, that your current model can be refined to perform better – for example, giving your students more time with the on-line materials, monitoring progress and performance during that time, and scheduling real or virtual study halls for those who are lagging. I hope this helps.
November 26, 2008 at 3:17 pm #178112
Hi David – could you describe with more specificity what you do currently during weeks 1 and 2? E.g., do the students receive part of the curriculum during week 1 and the balance in week 2? What happens in the classroom? Lecture or activity?
November 19, 2008 at 9:40 am #57655
This sounds intriguing – any simulation or demonstration to increase understanding would be fantastic for me too.
May 15, 2008 at 12:52 pm #171988
Can I also have the ppt file?
Tim
April 29, 2008 at 5:14 pm #171549
Interesting exchange. I'm consistently amazed at how individuals who supposedly make decisions on the basis of data will express opinions here based on a sample size of one.
Ron – I would encourage you to read the Department of Defense's data and analysis on the effectiveness of self-directed on-line training and reconsider your opinion.
April 15, 2008 at 8:17 pm #171210
O.K. – I can accept that. I suppose my assumption was that there was control charting all along, not that there's just data that is now being charted and special causes assigned. Thanks.
April 15, 2008 at 7:51 pm #171206
Perhaps an equally silly question – does it really matter? If you're looking at process capability based on a year's worth of data, does the inclusion or exclusion of the relatively few data points change the respective values?
April 11, 2008 at 5:47 pm #171049
4 George – for insight into Master Dark Salmon Belt, visit http://www.ississippi.org (pronounced "i s s i s s i p p i dot org").
February 8, 2008 at 8:59 pm #168383
You may want to consider steroids to enhance your skills, just like Barry.
December 31, 2007 at 6:21 pm #166652
I am in the process of setting up a new ERP system, Syteline, at my company. I am implementing the ERP system and changing our methods to Lean Six Sigma along the way. We started January 2007; the expected go-live date is March 2009.
I am looking for any and all ideas from those who have done the same. I need lessons learned.
thank you much
December 4, 2007 at 9:12 pm #165717
Outlier – you did such a nice job in responding to the earlier post seeking information about "what is six sigma." Perhaps you can prepare another response for this post that we can cut and paste ad infinitum. Thanks.
November 19, 2007 at 8:05 pm #165104
I would like it as well. TG
November 17, 2007 at 2:07 pm #164967
Tamlyn – the problem with the output of the ASQ CSSBB process is that it fails to eliminate candidates who are unable to complete, or simply have not completed, project work. While the BOK is a decent recitation of substantive skills, ASQ has set the bar too low by merely requiring affidavits with respect to project work rather than delving into the work product itself. Reviewing the work and interviewing the candidate in some way to assure that (1) the candidate actually performed the work and (2) understands it seems to me to be critical. Also, the testing format is odd (no calculators in this day and age?). Consequently, if you consider the market's expectations of what a BB's capabilities should be as a specification, then what the chatter indicates is a process that generates too many defects. Some will be good, somewhat experienced black belts, but too many are not. A certification process should be biased so as to possibly reject qualified candidates rather than likely pass unqualified candidates.
October 29, 2007 at 5:26 pm #164097
I did not know all of Reigle's various incarnations, but I did enjoy the smackdowns he gave to the many so-called experts on the forum.
October 25, 2007 at 11:41 pm #163898
I agree. Mr. Reigle was a knowledgeable contributor to the SS community; I'll miss him.
October 15, 2007 at 2:26 pm #163119
KP – You'll be able to write a book about your experiences, KP. Among the problems you may come to face in applying CPI to a firm's practice will be (a) processes that are not well defined; (a1) attorneys love exceptions to rules, so lots of exceptions to the processes that you attempt to map; and (b) resistance to meaningful measurement. Nevertheless, if you can convince the partnership that real money lies beneath your planned efforts, then you may be able to gain some cooperation. In other words, you will be dealing as much with culture change as you will with anything else. Among my suggestions for areas of pain/gain: recruiting and retention of attorneys; pre-invoicing write-downs; delayed/untimely invoicing; processes for clearing conflicts of interest; and reduction of staff overtime. What sort of practice are you in? I am aware of case studies applied to an international trademark practice.
October 13, 2007 at 2:39 pm #163054
Tom – I understand your question as asking for typical projects found in BPO environments used for training. Shouldn't the question be the other way around? What projects need to be done? Then which projects are suitable for training? Sometimes training ends up with the poor reputation of not adding any value, not resulting in any changed behavior, failing to result in business results. By completing projects that are meaningful to the business as a part of the training process, the training effectively pays for itself. So I recommend first ascertaining what needs to be done, then parsing the projects for proper scope suitable for training, rather than searching for projects of proper scope but little relevance.
October 12, 2007 at 9:46 pm #163049
Robert – the implication being that a number of independent batches/production runs be made to generate a sufficiently large number of samples? Does this conclusion change if the process is continuous rather than a batch method? Also, thank you for your continuing practice of providing sound advice. I always make a point of reading threads on which your name appears.
September 27, 2007 at 2:39 am #161846
Minitab 15 has a Select Optimal Design feature which allows you to choose the number of runs and the terms that you would like to be able to estimate. I believe this sounds very similar to what you are describing in JMP, but I'm not too familiar with JMP.
May 24, 2007 at 11:35 am #156468
Set the temperature range to Low, Median, and High, or just Low and High (0, 1), and do the same for the other categories.
May 24, 2007 at 10:40 am #156467
Adam, how do you get your process/es in control if you do not address the root causes of the special causes, eliminate them, and prevent them from recurring?
February 7, 2007 at 5:11 pm #151694
You should consider speaking with Doug Evans, Director of Six Sigma Training for Quest Diagnostics. He has spent a substantial amount of time refining their programs and even has data supporting their changes in approach (imagine that!).
January 25, 2007 at 7:04 pm #151062
No problem; I'm glad you still posted your reply – I am very interested in the subject now also.
November 20, 2006 at 11:03 pm #147632
That last post had a bit of a pong!
October 2, 2006 at 6:13 pm #144183
Hi Hank. Hope all is well. Do you think I can get a copy of the Excel spreadsheet? Thanks — [email protected]
September 28, 2006 at 5:47 pm #143961
Poor understanding of the AIAG elements, incomplete paperwork, poor communication to suppliers, material problems with sub-suppliers, not being used as a metric to drive better performance, poor drawings.
September 5, 2006 at 12:42 am #142758
Thanks for the input. I already have most of the background that the training provides; however, most companies are looking for someone who is "certified". I don't want to spend tens of thousands to learn things I already know.
July 31, 2006 at 8:07 pm #141263
I took both my Self-Study Green Belt and Black Belt training courses from Cornerstone Six Sigma Consulting. They were both excellent. The courses were comprehensive and the material was straightforward and easy to understand. The prices are low and worth the money: $375 for Green Belt and $975 for Black Belt. Plus, one of the key selling points for me was that I could call an instructor if I needed one-on-one mentoring. None of the other self-study courses provided mentoring from the instructors… so this was a huge plus for me.
I also know several other people who took their courses and really enjoyed them.
Their website is: http://www.cornerstonesixsigma.com/training_self.htm
July 18, 2006 at 2:27 pm #140543
I second that. Toyota's perceived quality is getting old.
July 13, 2006 at 5:51 pm #140310
Sam, so true! Great quote.
July 13, 2006 at 3:03 am #140273
Hey there, GE90. I don't personally know Dr. Harry but have heard a lot about him (mostly very positive). It seems Jack Welch thought a lot of Dr. Harry – so much so that Welch selected SSA to be the consultancy of choice and cited Dr. Harry quite a bit in his autobiography and throughout the GE Way book. It seems DuPont thinks a lot of him as well. Why are these top executives aligned with him?
Perhaps you, Darth, and Stan could tell us the references within which you guys are cited by several top business leaders (like Jack Welch). Could it be that you are intimidated by successful people and lash out at them because they achieved something you did not? It sure seems so from reading your posts.
I would warmly recommend that you focus on presenting your own contributions to the field of Six Sigma rather than knocking the well-recognized efforts of others.
June 28, 2006 at 7:33 pm #139753
Very insightful! These indeed are drivers, as they affect overtime and premium compensation, especially with represented labor. This might drive you to consider working crews on split shifts to avoid OT pay, but there are efficiency and safety issues with working around live power in the dark! While I've touched on this aspect of the problem a little, I've also recognized that the time the storm hits is out of my control. I can, however, control our "reaction" to the storm. Knowledge of these X's may prove useful when we are making decisions about the timing of our response. For instance, calling crews in at 5am vs. waiting for the normal shift start might not make financial sense, unless you're impacting a health, safety, or large commercial customer.
My project appears to have some lower hanging fruit than even this – waste in the form of wait time incurred as a result of our “all hands on deck” approach. That appears to be where my biggest savings opportunity exists initially.
I'm in the Midwest. We've been looking at season, temps, wind speed, etc., and have developed a response strategy based on many of these factors. To your point, an ice storm that hits while many trees still have leaves on them, followed by heavy winds and cold temps = VERY BAD news for us! LOL
One of my peers has worked extensively on minimizing outage frequency and duration. These are both public service commission measures we are rated on. As you might suspect, the small, one-customer outages don't influence our performance on these metrics as heavily as the mass outages that may last for several days. Massive improvements on the one- and two-customer outage situations have very little impact on this metric. However, minor improvements on the very large scale outages have very large influence on our performance vs. the metric.
Thanks for the feedback and ideas!
Tim
June 28, 2006 at 6:11 pm #139749
Yes, it is in fact a bimodal distribution as you indicate. Yes, I have separated the large and small storms. I'm also considering using the median rather than the mode. One of the key improvement opportunities is around resource management. While it will have a positive impact on the small storms, the largest gain will come from the large storms. I like your suggestion. I may be able to find another "x" that has greater influence in the small storms and address that one as well, setting different improvement targets for the different types of storms…
Tim
June 26, 2006 at 3:23 pm #139605
You may want to check out N.C. State's certification programs given your proximity. Good Luck.
June 23, 2006 at 6:59 pm #139538
I agree – Project Management is the language of implementation, so practitioners of any improvement methodology (CMM/CMMI, Six Sigma, Lean, TQM) which will likely require projects would benefit greatly from improved project management skills.
Check the following link for what I’m referring to on “belts” in the use of Microsoft Project – this could be the source of the confusion: http://www.iil.com/msproject/
Tim
June 23, 2006 at 6:52 pm #139537
Thanks Andy. You’ve hit the nail on the head. I appreciate your feedback and helping to clarify the difference between the average “cost per job” in aggregate and the average of the average cost for jobs in storm 1, storm 2,….
We do in fact have more "small" storms with fewer affected customers and fewer jobs. And, as you state, when we have the larger storms, there are more affected customers and more jobs to spread the costs across. However, several X's are impacting the costs we incur for this activity in larger storms, resulting in those costs actually being much higher than in the small storms. For instance, that $93.85 average cost per job is across both small and large storms. When looked at separately, the small storms' average cost per job is $74.34 whereas the large storms' average cost per job is $114.50! Even given more customers/jobs to spread the cost over…
One driver here is how we allocate resources during large storms – it's more of an "all hands on deck" response, with lots of wait time punctuated by periods of heroic effort to get the power restored.
In your opinion, which measure is better to use as a baseline for current state performance (and thus improvement target) – the average cost per job in aggregate or the average of the average cost per jobs in storm 1, storm 2, etc.? While most of my improvement efforts are focused on the larger storms, I do expect some improvement on the small storms as well….
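To make the distinction concrete, here's a toy sketch of the two baselines (the storm figures below are made up for illustration, not our actual data):

```python
# Hypothetical (jobs, total spend) per storm: two small storms, one large one.
storms = [(20, 20 * 75.0), (30, 30 * 72.0), (200, 200 * 115.0)]

total_spend = sum(spend for _, spend in storms)
total_jobs = sum(jobs for jobs, _ in storms)

# Baseline 1: aggregate cost per job (each job weighted equally).
aggregate_avg = total_spend / total_jobs

# Baseline 2: average of the per-storm averages (each storm weighted equally).
avg_of_avgs = sum(spend / jobs for jobs, spend in storms) / len(storms)

print(f"aggregate: {aggregate_avg:.2f}, average of averages: {avg_of_avgs:.2f}")
```

The aggregate figure is dominated by the large storm's many high-cost jobs, while the average of averages treats every storm the same, so the two baselines can sit far apart whenever storm size and unit cost are correlated.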
Thanks again for your help and feedback. This has been puzzling me! LOL!
June 23, 2006 at 6:07 pm #139532
I'm a PMP also, working on my black belt. I haven't seen anything like what you're talking about; however, I do recall some Microsoft Project certification levels such as orange and green belts. I believe the International Institute for Learning (IIL) offers it – check their website. Also, the Microsoft Project Association user group (MPA) may have information about it.
June 23, 2006 at 4:27 pm #139525
26 is the number of storms we had in the past 2 years. During each storm, we performed this activity. I have a "spend" amount and a number of events (or jobs) performed for each of these 26 times over the past 2 years…
June 2, 2006 at 2:32 pm #138548
Thanks.
May 28, 2006 at 8:27 pm #138302
* The statement only says that 83 "claimed" to have watched — you need to know what proportion of Americans are liars! ;-)
* You could just ignore the data and make up your own answer. That seems to be a Conservative approach to issues like global warming. ;-)
May 28, 2006 at 6:46 pm #138301
Since it is a homework problem, I won't give the answer, but I will give a hint. Imagine you numbered the maps as you put them into the three envelopes.
The first envelope has [ M1, M2 ]
The second has [ O1, O2 ]
The third has [ M3, O3 ]
Now think about the probabilities…
Tim F
May 19, 2006 at 8:18 pm #137849
Like you, I have over 20 years' experience in industry; of course experience will win out every time. Being certified is only the start, and over time, with more projects/experience, one can only improve. However, in Europe many recruiters are actually looking for certification up front. It is like the PMI certification: having it will not make you a good project manager, but it is intended to provide a standardised approach which improves the chance of success. Actually, in Europe Prince2 is becoming the preferred choice (even though its origins are IT and telecoms).
On a final note, and getting back to the point, I am quite sure the six sigma program is one of the better ones and of course has proved itself many times over.
Enjoy the weekend.
May 19, 2006 at 6:01 pm #137832
I was just pointing out that there is nothing new in Six Sigma over and above what an industrial engineer would learn through a degree program. My criticism is of the 'industry' which is often created on the back of 'repackaging' old tools and techniques under new 'banners'. Hey, I am not complaining… just making an observation. Six Sigma is NOT all about certification, but certification is often a requirement within organisations. I have initiated a program with my local university to have all engineering graduates certified in Six Sigma (like similar programs in India) and other 'in vogue' tools, rather than expect their prospective companies to fork out money for such programs at a later date.
In fact, I have had many successes using standard industrial engineering techniques in process improvements across multiple sectors. The query on Six Sigma certification is really about keeping up with what is in vogue, and perceptions… which at the end of the day are the only reality.
In any case, nobody is an idiot and when you realise that, then you will realise the full value of the individual and the real source of improvement and innovation within organisations.
Many thanks for your comment which I found insightful… especially on a Friday.
May 18, 2006 at 5:00 pm #137779
Yeah, that's true… but the reason we have heard about Enron etc. is because of the regulation, and they are the 'exceptions which prove the rule'.
I wonder how many failed Six Sigma projects there are?… All in jest; thanks for the reply.
May 18, 2006 at 4:22 pm #137773
Six Sigma is really a 'repackaging' of just some of the techniques which any industrial engineer would have studied in college. Every couple of years we get a 'new' application or methodology to use in industry, and many people make a buck out of selling it with some 'certification'. It may not be a bad thing, as it exposes people to what is really an 'engineering' domain. However, I do feel that these certificates tend to be 'in vogue' and lessen the 'professional aspect of engineering' when they fail to deliver. Having said all that, I will probably follow the rest of the flock and get the certificate… or maybe I'll get out of engineering and go into finance, which seems to be better regulated.
May 18, 2006 at 4:07 pm #137770
I guess you believe they are not recognised?
Do you know who the 'licensing authority' is…?
May 4, 2006 at 7:05 pm #137263
Could I too have a copy of the "Integrated Business Excellence Model"?
March 21, 2006 at 6:40 pm #135308
Elisa – your English is good enough to be understood. Please refer to the "new to six sigma" link on the left side of this screen – it will provide you with the resources you seek.
March 21, 2006 at 3:48 pm #135299
Might I respectfully submit an alternative to your characterization of the forum? Given that your post was made at midnight in the U.S., perhaps those here were merely sleeping, unaware their unresponsiveness might offend.
March 9, 2006 at 4:25 pm #134837
After checking out the forum for a little over a year now, I remain interested in the recurring theme of certification, the relative value of certification(s) and certifying bodies, etc. But for the seemingly endless discussion of shift, certification could be the messiest topic within the six sigma industry.
Certification messes are not unique to six sigma. However, it appears some organizations outside of six sigma are making efforts to bring an ISO-like approach to certifying bodies. See http://www.ansi.org/news_publications/news_story.aspx?menuid=7&articleid=1119. See, too, the DoD's recent mandate requiring compliance with ISO/IEC 17024, General Requirements for Bodies Operating Certification Systems of Persons: http://www.ansi.org/news_publications/news_story.aspx?menuid=7&articleid=1159. It appears that the standard relates more to the process of administering certifications, not to the merits. I do not know much about these standard-setting/auditing organizations, but it seems to me that companies holding themselves out as being ISO compliant would be interested in this personnel certification standard, and that one or more of the current SS certifying bodies might have a comment.
Any thoughts from the collective as to how this might apply to the SS community?
March 6, 2006 at 11:01 pm #134727
You may want to re-check your data on Villanova costs. I understand the GB runs $1980 with BB an additional $3780. They are advertising an introductory special of GB and BB for $5255.
March 1, 2006 at 7:22 pm #134486
LEA – I don't understand your reference to "the 4 weeks of training." Regardless, perhaps you should consider a course of study that allows you to be self-paced (cruise through the familiar, slow down for the new) rather than risk a 4-week time period only to cover a lot of old material.
February 24, 2006 at 11:23 pm #134259
Re the late shipment portion of the question: does late shipment result in delayed invoicing and payment? The time value of late payments may underlie your thoughts about increasing cost associated with increasing delay.
February 22, 2006 at 5:52 pm #134127
Gavin – Interesting exercise. Any interest in sharing the title of the game? Thanks in advance.
January 19, 2006 at 3:18 pm #132567
Deba – I recall an article on this topic in ASQ's Six Sigma Forum magazine about a year ago dealing with trademark management in a corporate legal department. Rini Das was one of the authors; you may want to look her up. Tim
June 20, 2005 at 8:36 pm #121836
I would also like a copy if you could forward it: [email protected]
Thanks so much. Tim
June 8, 2005 at 4:48 pm #120950
Interesting thoughts. Would you please send a copy to me at [email protected]?
Tim
February 18, 2005 at 3:38 pm #115098
Stan – fascinating. Thank you.
February 15, 2005 at 3:25 pm #114898
Greg – I am new to SS and find the house of quality to be a challenge. In reacting to your question, I think you may be overlooking features that are now ubiquitous in mobile phones – for example, keypad sizing, backlighting on the keypad, antenna size and performance, etc. Take a critical look at your phone. It seems to me that the list could be quite lengthy. Good Luck.
February 7, 2005 at 12:30 pm #114573
What did you contractually agree to?
The bottom line is this — your company is required to perform "contract review," including assessing the feasibility of meeting contractual commitments such as delivery times. How situations such as the one described are to be handled should have been defined in the contractual agreement. If your company failed to meet contractual agreements, then it should be counted. Corrective action may need to focus on your contract review/acceptance process if this is not addressed.
January 19, 2005 at 4:58 pm #113688
Unfortunately I wasn’t able to decipher your response.
I don’t know:
1) Who Wheeler is.
2) The dogma to which you refer.
3) Anything about the IntraClass correlation coefficient.
4) Why the number of distinct categories output by Minitab doesn't correlate with the value that Beverly found for #3 (above).
5) Where the terms “better” or “correct” appeared in Beverly’s explanation.
However I would like to know more.
I do know:
1) Beverly’s response was the only reply that truly addressed my question.
2) Her explanation appeared clear and logical.
3) While her method may not be mathematically perfect, it certainly did provide a practical approach for solving my dilemma.
I would be happy to provide anyone who is interested with the data for the gage R&R study and capability study that were used in my analysis. I would certainly like to learn other ways to solve this problem.
Tim
January 18, 2005 at 7:45 pm #113661
The NDC that I reported came directly from the Minitab gage R&R output (using the ANOVA method). I definitely need to discover where I made my mistake. Your value of +/-.0017 mm works out to 6.8% of the tolerance (0.025 mm) and presents a fifth option with an explanation that I think will satisfy my customer. Your answer was precisely what I was looking for and yes, I need to improve the process. Thank you very much for your help.
Tim
January 18, 2005 at 4:49 pm #113652
Rather than risking another misinterpretation of your "rule of thumb," let's consider the number of distinct categories found during the gage R&R instead. That number is 27. I believe this is a measure of resolution. It's arguable that it should be better, but in my experience a "shop hardened" gage with this type of performance for a dimension having a tolerance of 0.025 mm is acceptable.
Cp = 1.05, CpK = 0.73 (CpL = 0.73, CpU = 1.38). The histogram of the data set showed me that the mean value wasn’t centered and hence there was an opportunity for improvement. However for this situation even a perfectly centered process will generate scrap.
Whether or not state-of-the-art equipment can achieve a given process capability depends upon a number of factors. I agree that you may be able to generate better process capability for this particular tolerance on a less-than-state-of-the-art machining center. However, it depends upon the process. We routinely reach CpK values in the neighborhood of 2 or better with some machining processes. The process in question is a single-point turning operation. Because tool wear is appreciable, the machine operator must monitor the bore size and adjust the machine accordingly. Although the machine itself can accurately and repeatably put the tool within 0.002 mm of its intended position, the process (tool wear) and the operator (tool adjustment) add additional variability. Thus the process standard deviation is 0.004 mm. That is the cold, cruel reality.
Given that the process is “not capable,” all parts are measured and scrapped when they exceed the upper and lower specification limits defined by the blueprint. The customer wants my measuring system to have a gage R&R less than 10%. It doesn’t. I want a process with a CpK greater than 2. I don’t have it. It will take time to improve the measuring system and the process. The customer wants me to start using “adjusted” specification limits and wants action now. Which option can I credibly argue: 1, 2, 3, 4, or something else, and why? And by the way, renegotiating the selling price is not an option.
Tim0January 17, 2005 at 9:56 pm #113629
Our PPM is 17,347 (1.73% out-of-spec). The Cp for the process is 1.05. The process standard deviation is 0.004 mm. There is an opportunity to bring the mean closer to the nominal diameter of 54.9035 mm but this will not address the issue in question. There is no room for negotiation with the customer regarding the blueprint specification limits. That is a matter defined by our contract. We are using state-of-the-art CNC turning machines and state-of-the-art cutting tools. A capital investment in more advanced equipment (CNC grinding machines) is simply not feasible. The standard deviation for the gage is 0.0008 mm. According to your rule of thumb the standard deviation of the measurement system is less than 1/4 of the tolerance so the measurement system should have sufficient resolution. In fact the number of distinct categories reported for the gage R&R is 27. I would be concerned if it were below 5 or 6 but it isn’t. I hope that this clarifies the situation.
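The quoted indices can be cross-checked with a short normal-theory script. The sd = 0.004 mm and the 0.025 mm tolerance are from this thread; the mean and spec limits below are back-calculated assumptions, not quoted values:

```python
from math import erf, sqrt

def capability(mean, sd, lsl, usl):
    """Cp, Cpk, and the expected out-of-spec rate (PPM) for a normal process."""
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    cp = (usl - lsl) / (6.0 * sd)
    cpk = min(mean - lsl, usl - mean) / (3.0 * sd)
    ppm = (phi((lsl - mean) / sd) + 1.0 - phi((usl - mean) / sd)) * 1e6
    return cp, cpk, ppm

# Assumed mean and limits chosen to reproduce Cp ~ 1.04 and CpL ~ 0.73
cp, cpk, ppm = capability(54.8998, 0.004, 54.8910, 54.9160)
```

Note that the normal-theory prediction (around 14,000 PPM for these figures) will not exactly match an observed 17,347 PPM; the observed figure includes whatever non-normality and drift the real process has.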
I would like to be able to use the allowable tolerance but the customer is squeezing us. I think that I can make an argument for reducing the upper and lower specification limits by only 4.8% but I’m just not sure. I thought that others on this forum might have had similar experiences that would shed light on my predicament. That’s why I posed the question here.
Tim0January 17, 2005 at 8:56 pm #113625
I am responsible for supporting the Six Sigma program at all of our plants. Projects are selected based upon financial impact. It has been determined that the three most important projects at this time are the two in Mexico and the one in Indiana that I’m presently managing. I’m the company’s only Blackbelt. Projects at the other plants are in the queue. Although I consider my travel time to be unproductive, it is necessary for my particular situation. This is about the only time that I’m not actively engaged in project work. I have been asked to complete 4 projects this year for a total cost savings of $1M while training, testing and certifying 8 Greenbelts. I hope to be more efficient in the future as Six Sigma becomes part of our culture, but for now I consider what I’m doing to be a realistic expectation. I thought that the original question was asking for an opinion. That’s my opinion: 4 projects and $1M savings per year per Blackbelt.0January 4, 2005 at 12:39 am #113075
Just ask your customer when you were late. Was it when you didn’t deliver as committed & shut down their line or when you finally got around to shipping the product?
0December 25, 2004 at 1:16 pm #112831
The requirements for early receipt should be defined during the initial contract review. Customers normally allow some window for early delivery. Unless it is cost-feasible to track every shipment, product shipped to arrive during that window is “assumed” to be on-time unless notified otherwise by the customer. Any portion not shipped to arrive in the window is assumed to not be on-time. When your company renegotiates a new date for delivery of remaining product that was unavailable for shipment to meet the original promise date, that product must then be shipped to meet the new commitment or it is late again. Best to count against the promise date that is missed rather than the actual ship date.0December 13, 2004 at 11:09 pm #112304
It’s called contract review. You’re required to meet those customer requirements you’ve contractually agreed to (or promised). Although all good companies will do their best to meet all customer desires, existing material and/or process lead times have to be considered (although the company should be continually looking at reducing those lead times).0December 13, 2004 at 11:00 pm #112303
Boy, mis-spell one word and it generates an entire thread of its own.
Tim0December 13, 2004 at 12:00 pm #112267
I don’t know why so many people want to play games with this. Bottom line: if you don’t get the quantity of parts to your customer on the date promised, they are late. If you were supposed to ship and did not, of course those have to be included in the calculation. Here’s the operational definition I use for On-Time.
On-Time Delivery: Product shipped as required to arrive at the specified destination by the original promise date, unless an adjusted date is initiated by the customer or caused by a customer action for which xxx Corporation has no control. Any amount of the order quantity not shipped in time to arrive at the destination by the promise date is considered late (customer notification of a late delivery and re-negotiating a new due date does not negate the original late delivery).0December 9, 2004 at 10:57 am #112090
Thomas, would you share what you determined to be the 11 significant variables?
Tim0September 24, 2004 at 9:39 pm #107990
Why do you think that just because there is less discussion, it means that no one goes to that channel? I personally am a subscriber to the newsletters of that channel because I’m in IT supporting our finance department. They have good articles, and I would think it very narrow-sighted of isixsigma to take away such a benefit just because there is less activity on that particular forum. Wouldn’t you agree?
What you’re saying is rather strange. Let’s take a closer look. If I owned a restaurant in Nebraska (let’s say) and 95% of the customers ordered beef, but there were 5% that ordered fish, why would I stop serving fish and dissatisfy 5% of my customers? Especially because fish eaters usually like fine wine, which is a 500% profit compared to the water or Miller beer that beef eaters drink. Now I’m poking fun at you, but you get the idea: just because one factor that you’re aware of is not as strong as you would think appropriate doesn’t mean that the entire channel is bad.0September 17, 2004 at 10:01 am #107509
AIAG MSA leaves it to the discretion of the evaluator to determine sample size and frequency based on their knowledge of the measurement system. If you are asking how many sub-groups must be collected before control limits can be established and decisions made on the stability of the process based on those control limits, once again no specific criteria are given (the example provided in the MSA Manual uses 20 sub-groups).0August 19, 2004 at 11:35 pm #106040
You say, “You don’t think that V has returned to Earth do you?”
Always trying sooo hard to hang out with the “cool” group by putting down others (kind of like what kids do in preschool). I guess some of us never grow up even when we get old and bald.
V. will get back to Earth faster than you because his ship is much closer!0August 10, 2004 at 8:42 pm #63644
JJ, just curious: in your equation for cost of outage, why is there not a weighting for the cost to the customer? It seems you’ve accounted for internal costs, but what about revenue disruption (short and long term)?0August 8, 2004 at 11:28 pm #105247
You absolutely deserve the title of Chief BIG MOUTH with minimal KNOWLEDGE. Congratulations!!!0August 2, 2004 at 12:18 pm #104870
Why not try for Darth’s place instead of Citi. Just look at him he is a “full time employee” and he is online 24/7. How hard would it be to do his job?0July 30, 2004 at 12:47 pm #104720
You should not put Stan and Darth in the same group. At least, Stan has some Six Sigma knowledge and experience. He is NOT all talk!0July 29, 2004 at 5:09 pm #104649
I am not a bit surprised about your post. All you do is just talk. In your case, PHD stands for Piled Higher and Deeper.
PS: any misspelling?
0July 29, 2004 at 2:06 pm #104619
I would like to know how that “Dr himself…PHD….” can come up with such a strong conclusion? Has he worked in any deployment role or any role other than teaching to make a conclusion like that??0April 29, 2004 at 2:55 pm #99420
No shot intended. I just thought you knew something about air flow and saw some hints in the post I didn’t see.
I also thought you might be pointing out that with small numbers of factors the response surface design was nearly as efficient, in terms of number of runs, as the full factorial design – which is NOT pointed out in most DOE classes.0April 29, 2004 at 2:17 am #99376
Knowing simply that the response variable is airflow, I’m curious how you can conclude that all three factors are significant without collecting any data?
Or was your point along the lines of: a 3-factor screening design takes 8 runs + centerpoints + replicates, which could easily add up to 18 or more runs, while a 3-factor central composite design would typically call for 20 runs, so the two are nearly comparable in size? And that both would provide ANOVAs capable of testing main and interaction effects?
If runs are inexpensive, then considering this is a first DOE, my tendency would be to run a full factorial 3-factor design with settings as follows:
Diam1: Lo = 1.7, Hi = 2.5
Length: Lo = 4, Hi = 10
Diam2: Lo = 2.8, Hi = 3.6
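To make the run list concrete, a 2^3 full factorial with replicated centerpoints can be generated in a few lines. The factor names and levels are the ones suggested above; the choice of 3 centerpoints is arbitrary:

```python
from itertools import product

# Factor levels as suggested above (names are illustrative; adjust to your process)
factors = {"Diam1": (1.7, 2.5), "Length": (4.0, 10.0), "Diam2": (2.8, 3.6)}

corners = [dict(zip(factors, combo)) for combo in product(*factors.values())]
center = {name: (lo + hi) / 2.0 for name, (lo, hi) in factors.items()}
design = corners + [center] * 3  # 2^3 = 8 corner runs plus 3 centerpoints
```

In practice you would also randomize the run order before executing the design, to guard against time-related lurking variables.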
Careful consideration needs to be given to the viability of the extreme combinations of the factors. Will a unit running with Diam1=1.7 and Length=4 and Diam2=2.8 have sufficient function to get a decent measurement? If not, then you might have to pull in the lows and highs.
Per a previous post, you also have to consider the ability of your measurement system. Can you adequately measure the differences in air flow relative to the natural variation in the process? Usually your measurement precision should be about 1/10th or less of the process variation. If not, changes in air flow responses from the DOE’s various treatment combinations may not be detectable.0April 29, 2004 at 1:57 am #99375
If you are testing for differences between means of several groups, the ANOVA is fairly robust against nonhomogeneous variances, especially if the sample sizes for each group are nearly equal.
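When the variances differ, weighted least squares is a common remedy. Here is a minimal numpy-only sketch on synthetic heteroscedastic data; everything in it (data, parameters) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
sigma = 0.5 + 0.3 * x                        # noise standard deviation grows with x
y = 2.0 + 1.5 * x + rng.normal(0.0, sigma)   # true intercept 2.0, slope 1.5

# Weighted least squares: solve (X'WX) b = X'Wy with weights W = diag(1/variance)
X = np.column_stack([np.ones_like(x), x])
w = 1.0 / sigma**2
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
# beta[0] should land near 2.0 and beta[1] near 1.5
```

The weighting down-weights the noisy high-x observations, which ordinary least squares would treat the same as the precise low-x ones.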
If the sample sizes are not nearly equal, then you have the option of using weighted least squares regression. The most common technique involves using weights equal to 1/variance, where the variance is that of each of the respective groups.0April 29, 2004 at 1:45 am #99373
What Zorba needs to do is provide a more detailed description of exactly how the data are to be collected. With all due respect, a table of the data usually provides only half the story.
Are these, say, 30 units that are measured after the first process, and then measured again after the second process? In this case the two processes are applied to the very same units, so the data are not statistically independent, which calls for use of a paired t-test — which is simply a one-sample t-test of the differences between each “natural” pair. (Ask yourself: does it make sense to calculate the difference between a unit’s value after process 1 and after process 2?) ANOVA is not applicable in this case.
OR are you going to take 30 units, subject them to process 1, then measure them, and then do this again with 30 completely different units subjected to process 2? In this case the two samples are statistically independent of each other, so a two-sample t-test (or an ANOVA) should be used.
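The difference between the two tests is easy to see numerically. A small numpy-only sketch with simulated data (scipy’s stats.ttest_rel and stats.ttest_ind compute the same statistics):

```python
import numpy as np

rng = np.random.default_rng(1)
before = rng.normal(10.0, 1.0, 30)           # 30 units measured after process 1
after = before + rng.normal(0.5, 0.2, 30)    # same units remeasured: dependent data

# Paired t-test = one-sample t-test on the per-unit differences
d = after - before
t_paired = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Two-sample t statistic (incorrectly ignoring the pairing)
sp = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2.0)
t_twosample = (after.mean() - before.mean()) / (sp * np.sqrt(2.0 / len(d)))
```

The paired statistic comes out much larger because the unit-to-unit variation cancels in the differences; treating dependent data as two independent samples throws that precision away.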
Anytime you analyze data, two characteristics are critical: the data themselves (often showing relationships via a table) and the method in which the data were collected. Independent data values that call for a two-sample t-test can appear very similar to dependent data values that require a paired t-test.0April 8, 2004 at 6:25 pm #60087
You should start by reading the left hand side of this web page (“New to Six Sigma”). You might also want to buy a book or two. If you understand nothing about process management and process improvement then six sigma is not right for you yet. There is much to be learned first. Good luck.
Tim0April 7, 2004 at 4:24 pm #98144
Are you using Weibull++?
Their web site provides a wealth of information, but it doesn’t provide much advice on your concern.
Minitab statistical software offers excellent advice on the topic, but since it is probably copyrighted, I don’t think I can copy/paste it here.
Basically the idea is that the LS method is better for small sample sizes (n<30-50) or when there is very heavy censoring. The MLE method overall provides better estimates, but doesn't do so well when sample sizes are small.
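As a rough illustration of the MLE route, scipy can fit a two-parameter Weibull to complete (uncensored) data; the parameters below are made up, and this does not reproduce the censored-data handling that dedicated reliability packages provide:

```python
import numpy as np
from scipy import stats

# Simulate 200 complete failure times: shape (beta) 1.5, scale (eta) 100
data = stats.weibull_min.rvs(1.5, scale=100.0, size=200,
                             random_state=np.random.default_rng(2))

# Maximum likelihood fit of the two-parameter Weibull (floc=0 pins the location)
shape, loc, scale = stats.weibull_min.fit(data, floc=0)
# shape and scale should land near 1.5 and 100
```

With n = 200 the MLE recovers the true parameters closely; with small n the shape estimate is known to be biased, which is the trade-off against least squares discussed above.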
I never did understand why Weibull++ offers regression on X. Maybe someone can explain what that is used for.
When I took a reliability training from Bill Meeker, who co-wrote a very popular book on the subject, my impression was that he was a VERY big fan of ML estimation.0March 28, 2004 at 9:22 pm #97487
If using Minitab, you could also look at a Marginal Plot using histograms. This gives a scatterplot with histograms on the margins.
If the scatterplot hints of a relationship, then you can use regression techniques to model the relationship and determine if that model is statistically significant.0March 15, 2004 at 5:03 pm #96913
When referring to Cpk, Minitab (and most others these days) is referring to a metric that essentially “averages” the within-subgroup variation across a series of subgroups and ignores between-subgroup variation.
Minitab offers several methods of “averaging” the variation:
> The Rbar method estimates a standard deviation based upon the average of subgroup ranges. This is the default, since it matches the methods given by the AIAG Measurement System Reference Manual; it matches the methods used in a range control chart.
> The Sbar method estimates a standard deviation based upon the average of subgroup standard deviations. This matches the methods used in a standard deviation control chart.
> The Pooled standard deviation method estimates a standard deviation using a degrees-of-freedom-weighted average of the subgroup variances. I think of this as what a statistician would have done – not that the other methods are wrong.
Now, if you tell Minitab that the subgroup sample size is 1, there is really no way to calculate ranges or standard deviations for a sample size of 1. So, Minitab makes an assumption that the data are entered in the order that they were collected and that adjacent data values are more like each other (less variation) than data values that are far from each other. By default, Minitab calculates the within standard deviation using moving ranges of size two.
The moving range of size two is as follows: R1 = |X2 − X1|, R2 = |X3 − X2|, … By default Minitab uses the average of the moving ranges, but you are given the option to use the median or the square root of the mean squared successive differences.
So, if you enter a subgroup size of 1, Minitab provides an estimate of Cpk using moving ranges of size 2. If you enter a subgroup size of n (the full sample size), Minitab’s Cpk will equal Ppk.
Which is correct? Well, that is up to you. My advice is to enter 1, but use the Cpk with a bit of caution, knowing that it is not a “true” estimate of Cpk, but an estimate of Cpk using Minitab’s moving range method. In this case, if Cpk is close to Ppk, that suggests the variation between neighboring data values is similar to that of the full data set.
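That default calculation is easy to reproduce by hand. A short sketch, where d2 = 1.128 is the standard control-chart constant for moving ranges of size 2:

```python
import numpy as np

def cpk_moving_range(data, lsl, usl):
    """Cpk with sigma_within estimated as (average moving range of 2) / d2,
    d2 = 1.128 -- matching Minitab's default when the subgroup size is 1."""
    data = np.asarray(data, dtype=float)
    sigma_within = np.abs(np.diff(data)).mean() / 1.128
    return min(data.mean() - lsl, usl - data.mean()) / (3.0 * sigma_within)
```

For example, cpk_moving_range([1, 2, 1, 2, 1, 2], -3, 6) gives about 1.69: the moving ranges are all 1, so sigma_within = 1/1.128, and the mean of 1.5 sits 4.5 units from either limit.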
If that is all too confusing: unless you have “real” subgroups, just enter a subgroup size of n – the sample size of the full data set.0March 9, 2004 at 2:25 am #96592