# Kim Niles

## Forum Replies Created

Viewing 100 posts - 1 through 100 (of 113 total)
• #235784

Kim Niles
Participant

Just brainstorming… I’ve coached a few military-oriented Lean Six Sigma projects and always had to chuckle at the value saved: projects that redesigned the system so armored tanks wouldn’t go missing saved \$200MM, etc. Perhaps your art could utilize pictures of large pieces of equipment that have been found in some way… not sure. Good luck.

0
#167824

Kim Niles
Participant

Kiran:
Good question. I had a statistics teacher answer this once by telling me that correlation analysis is always through visual interpretation, whereas regression analysis is always mathematical. I’ve stuck with that definition even though I’m not very comfortable with it. I hope to read more posts on this thread.
KN  – http://www.KimNiles.com

0
#127447

Kim Niles
Participant

Nats:
I just want to be clear that “replicates” are defined as repeat experiments whereas “repeat measures” are defined as multiple samples per run.  Perhaps you’re just getting caught up in definitions.
In this case, you need to measure variation in the “repeat measures” in order to compare variation in the factors.
KN

0
#127445

Kim Niles
Participant

Nats:
ANOVA is, in essence, a signal-to-noise measurement used to provide confidence (based on variation) in factor-effect (factor-mean) estimates. It doesn’t compare variation across the factors to that end. I believe it was Taguchi who outlined using run-related signal-to-noise measurements and/or standard deviation / variation, as you stated you’ve been taught.

ANOVA assumes that the data comes from a normal distribution.  Multiple samples are taken per run in order to utilize the central limit theorem and assure that the data follows a normal distribution.
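As a rough sketch of this signal-to-noise view, a one-way ANOVA F statistic can be computed by hand; the three factor levels and their repeat measures below are entirely made up for illustration:

```python
import statistics

def one_way_anova_f(groups):
    """F statistic: between-level 'signal' over within-level 'noise'."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Signal: how far each factor-level mean sits from the grand mean
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    # Noise: variation of the repeat measures within each level
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical repeat measures for three levels of one factor
f = one_way_anova_f([
    [10.1, 10.3, 9.9, 10.2],
    [11.0, 11.2, 10.8, 11.1],
    [10.0, 10.1, 9.8, 10.2],
])
print(round(f, 1))  # a large F means the level means differ well beyond the noise
```

A large F (signal well above noise) is what gives the confidence in the factor means described above.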

I hope that helps.

Sincerely,
http://www.KimNiles.com

0
#112213

Kim Niles
Participant

Thomas:

Three thoughts. One is to make sure you explain what will become the assumptions you use in your final report, so you can later go back if you want and analyze the effects of temperature changes. For example:
- One hour is assumed to be sufficient for temperature change stabilization.
- The split-plot approach was chosen as it is assumed to sufficiently randomize temperature change stabilization error for our application.

My second thought is for you to note that the ideal injection molding response variable would be the viscosity of the injection liquid as it cools. Since you are not likely measuring that, you should know the correlation of the response to the control, which may be different for each of the 11 factors and not necessarily high.

My third thought is that, from experience, I have found anything temperature-related to be extremely significant in injection molding experimentation (including very significant interactions). Time-related variables are only very significant, and pressure-related variables are mildly significant. Along these lines, I’ve also found results to be rather linear and obvious after you’ve run a couple of experiments, such that prediction and/or extrapolation becomes more reliable than with other types of processes.

Good luck with it.
Sincerely,
KN – http://www.KimNiles.com

0
#109745

Kim Niles
Participant

Neel:

Regarding subjectivity, one of the main reasons why Six Sigma works is that it attracts top management. Top management tends to react to concrete financial figures. Therefore, it can be argued that Six Sigma should remain somewhat subjective, as its success lies in part in its ability to expose subjective costs that always exist in a company but were not seen under other initiatives.

Regarding preventive action projects, there are no restrictions on what a company must choose for its Six Sigma projects. I argue that companies at three and four sigma have very obvious problems that need attention before they think in terms of preventing other problems from popping up. Along these lines, one would expect companies operating at higher sigma levels to have more and more projects that are preventive in nature. Also, don’t forget that reliability-oriented projects are preventive in nature.

Good luck with your class.

Sincerely,
http://www.KimNiles.com

0
#107501

Kim Niles
Participant

Thanks Bill and Robert:

Interesting conversation. One quick thought I have related to problem solving is that benefits are usually obvious, so the trick becomes developing a test strategy / test plan that justifies the effort. Also, remember that the benefits are not always known in advance, such as when new discoveries are made or when factors suspected to have little effect actually have big effects and vice versa.

Good luck.

Sincerely,
http://www.KimNiles.com

0
#106754

Kim Niles
Participant

Dear JAG:

One more thing to suggest that I don’t see mentioned in other posts is to prioritize your 10 factors and fix in place as many as seems reasonable to the team before you start your DOEs.  This is assuming your goal is to produce evidence of a stable process for selling that product.

I like to adopt an RPN prioritizing system common in reliability FMEA programs, where I rate each factor from 1-10 for ease of obtaining data, estimated importance of obtaining that data, and expected significance relative to the factor ranges available. When those three team-based best-guess numbers are obtained, I multiply them together to get one priority number and then sort the list by that number.

Once the list is prioritized, test only the top 2-3 factors while holding everything else as constant as possible, and you will often find you are already at a point that justifies selling the product, since you only need to test until your experimental error rates are low enough to support a reasonably controlled process (given assumptions of long-term variation control). Of course, as Taguchi, Deming, and others point out, long-term process understanding and improvement efforts are justified forever.
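As a minimal sketch, that RPN-style prioritization might look like this in code; the factor names and 1-10 ratings below are hypothetical team guesses, not real data:

```python
# Each factor gets three 1-10 team ratings: ease of obtaining data,
# importance of that data, and expected significance over the factor range.
factors = {
    "melt temperature": (8, 9, 9),
    "hold pressure":    (7, 6, 5),
    "cooling time":     (9, 8, 7),
    "operator":         (3, 4, 2),
}

# Multiply the three ratings into one priority number, then sort descending.
priorities = sorted(
    ((name, ease * importance * significance)
     for name, (ease, importance, significance) in factors.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, rpn in priorities:
    print(f"{name}: {rpn}")
```

Testing only the top two or three of the sorted list, while holding the rest constant, matches the approach described above.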

Good luck with it.

Sincerely,
Kim Niles
https://www.isixsigma.com/library/bio/kniles.asp
http://www.kimniles.com/

0
#103002

Kim Niles
Participant

Dear Tim:

You’ve got some great ideas here and I’d follow them first. However, after you’ve performed multiple regression analysis and other forms of plotting to highlight what controls your situation, you might want to try to generate a mixture DOE from passive data. It might be tricky to find representative data, but if you are able to do that, and if there are only three key factors that affect your situation, then your end result will be a nice plot that will allow you to predict, based on those three factors, how much work to expect on a daily basis. I’m going to attempt to attach an example plot in an MS Word document. See attached if I’m successful.

Sincerely,
Kim Niles
http://www.KimNiles.com
https://www.isixsigma.com/library/bio/kniles.asp

Attachment(s): MixturePlotExample.doc

0
#102560

Kim Niles
Participant

Dear J.K:

I like Michael’s idea of performing a DOE on the whole process, but would suggest you test within acceptable tolerances on production, with every run being one entire day’s worth of production. Your response will simply be the quantity of special rejects for that day.

Since you don’t have a clue as to what causes the special reject, it’s likely caused by an interaction, and therefore DOEs are the tool to use. If you test within normal operating tolerances, then all good product would be acceptable, and since you are running thousands of parts per run, even very slight changes to key process variables can be measured.

Good luck,
Sincerely,
KN – http://www.KimNiles.com
https://www.isixsigma.com/library/bio/kniles.asp

0
#101129

Kim Niles
Participant

Harry:

You might want to read http://www.math.toronto.edu/mathnet/questionCorner/geomean.html, where you will see how the geometric mean answers the question, “if all the quantities had the same value, what would that value have to be in order to achieve the same product?”. Since the arithmetic mean answers the question, “if all the quantities had the same value, what would that value have to be in order to achieve the same total?”, one would tend to believe your situation properly uses the geometric mean, as I understand it.
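Those two questions can be checked directly; the values below are made up for illustration:

```python
import math

values = [2.0, 8.0, 4.0]

# Arithmetic mean preserves the total: arith * n == sum(values)
arith = sum(values) / len(values)

# Geometric mean preserves the product: geom ** n == prod(values)
geom = math.prod(values) ** (1 / len(values))

print(arith)  # the single value that gives the same total
print(geom)   # the single value that gives the same product (cube root of 64)
```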

Good luck either way.

Sincerely,
https://www.isixsigma.com/library/bio/kniles.asp
http://www.KimNiles.com

0
#100549

Kim Niles
Participant

Dear John, et al.:

This is a very interesting conversation to me, as not much is written on the 5 sigma wall and/or obtaining very high sigma levels.

Since any process sigma can also be considered to depend upon how close the outer edges of the process distribution (what is) come to the specs (what should be), achieving high levels of sigma must depend upon the following four factors:
1. Product or service complexity (process improvement flexibility)
2. Process improvement resources (realistic ability to improve processes)
3. Customer and market demands (design flexibility)
4. The company’s design resources (realistic ability to design in robustness)

I welcome other thoughts along these lines but suggest that if a company wants to achieve high levels of sigma, they must first fully address these four factors.

Sincerely,
http://KimNiles.com
https://www.isixsigma.com/library/bio/kniles.asp

0
#97906

Kim Niles
Participant

Gordo:
Here are a couple of points of advice:
1. Watch what happens as much as possible. I’ve performed similar experiments thinking those taking the data were well trained, only to find out they had mixed the samples in the bags.
2. Where possible, throw in one variable that you already have a good feel for. That way, if the results for that variable look strange, you’ll know something is wrong.
3. Create pretty plots where possible, as they help sell subsequent experiments and/or the results.
4. Perform confirmation runs after the experiment.
Good luck either way,
http://KimNiles.com

0
#97762

Kim Niles
Participant

Dear Andy:

Since no one has commented I’ll give you my thoughts.  Take them with a grain of salt … :)

Regarding your first sentence, as I understand your question, DOEs measure these issues as error and/or how well the data fit the model, depending upon the type of problem. When the R^2 value is low, your data did not fit the model in some way, such as when it is non-normal, etc. When error is high, the changes you made to the control factors you’re testing either had little effect relative to other changes naturally taking place (uncontrolled), were not consistent as you suggest, weren’t tested over a large enough range, etc.

I have read about using process capability analysis (i.e. Cpk) on samples of Cpk values, which I think answers your second question, even though I’ve never done it.

Regarding your third question: I believe that averaging replicates is the same thing as having no replicates and using an average value per run. This normalizes the data in accordance with the central limit theorem and therefore might work as you suspect, if the error in the data is affecting normality.
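As a quick sketch of that normalizing effect, here is the central limit theorem at work on made-up data from a deliberately non-normal (exponential) distribution: averages of five replicates per run have a tighter, more normal spread than individual values.

```python
import random
import statistics

random.seed(1)

# Individual measurements from a skewed (exponential) distribution, sd near 1
individuals = [random.expovariate(1.0) for _ in range(5000)]

# One averaged value per simulated "run" of 5 replicates
run_means = [statistics.mean(random.expovariate(1.0) for _ in range(5))
             for _ in range(5000)]

sd_individuals = statistics.stdev(individuals)
sd_run_means = statistics.stdev(run_means)

print(round(sd_individuals, 2))  # about 1
print(round(sd_run_means, 2))    # about 1/sqrt(5), roughly 0.45
```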

I hope that helps.

Sincerely,
http://KimNiles.com
https://www.isixsigma.com/library/bio/kniles.asp

0
#96296

Kim Niles
Participant

Vincent:
Context-based questions always pique my interest … .
My answer to your question is “yes” when the context surrounding the sentence supports it and “no” when it doesn’t. The word “factor,” for example, could be referring to a control factor or a response factor … big difference.
Does that make sense?
KN – http://KimNiles.com

0
#93500

Kim Niles
Participant

Dear MLT:
Improving anything costs money or other resources and therefore should take place in accordance with need. There is a spectrum of different levels of effort that might be required, from individual tasks to full-time sequestered team-based efforts. Six Sigma projects are somewhere in between.

Each level of effort has an appropriate use depending upon who is concerned and how severe the concern or expectation for success is. Examples include individual efforts, department to-do lists or ticket-based systems, working group efforts (low-priority improvement teams), task forces (high-priority teams), Six Sigma projects (some full time, some not, over a several-month project), and Kaizen blitzes or events (full time and sequestered, but only for a couple of days).

I hope this helps provide general areas where Six Sigma projects do and don’t apply.

Sincerely,
KN – http://www.KimNiles.com – https://www.isixsigma.com/library/bio/kniles.asp

0
#93499

Kim Niles
Participant

Dear Jason:

Since no one else has commented, I’ll throw in my 2 cents worth but take it with a grain of salt as it’s not within my area of expertise.

Sub-groups work via the central limit theorem: they don’t affect statistical confidence for most statements formed from groups of those measurements, only the measurement accuracy, assuming the underlying distribution is not perfectly normal.

Regarding selecting subgroups, that’s really a different subject that depends upon what you want to understand / measure and how non-normal you think the data might be. Using subgroups of three (one per shift in your case) would have the effect of normalizing the overall process and give you the best overall accuracy for any general statements you might form around your process Cpk. However, what if one shift is producing more problems than another? In practical terms, as I understand your situation, if you suspect that your data does form a normal distribution, you might be better off forming and comparing three different Cpk values, made from individual points, one for each shift.
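A minimal sketch of that last suggestion, computing a separate Cpk per shift from individual points; the spec limits and shift data are invented, and Cpk here is the usual min(USL − mean, mean − LSL) / 3s:

```python
import statistics

LSL, USL = 9.0, 11.0  # hypothetical spec limits

# Made-up individual measurements per shift
shift_data = {
    "shift 1": [10.0, 10.1, 9.9, 10.2, 10.0, 9.8],
    "shift 2": [10.4, 10.6, 10.5, 10.7, 10.3, 10.5],  # running off-center
    "shift 3": [10.0, 9.9, 10.1, 10.0, 10.2, 9.9],
}

def cpk(data, lsl, usl):
    """Cpk = distance from mean to the nearer spec limit, in 3-sigma units."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)  # sample standard deviation
    return min(usl - mean, mean - lsl) / (3 * sd)

for shift, data in shift_data.items():
    print(f"{shift}: Cpk = {cpk(data, LSL, USL):.2f}")
```

Comparing the three values exposes a shift-to-shift problem (shift 2 here) that a single pooled Cpk would average away.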

I hope that helps.

KN – http://www.KimNiles.com – https://www.isixsigma.com/library/bio/kniles.asp

0
#59962

Kim Niles
Participant

Dear Au:
It doesn’t sound like you are in a position to implement RTY. You appear to be inspecting quality in, as opposed to using inspection to assure your process is producing quality product.
RTY could help you understand where your rejects are really occurring, but you would need to inspect at all the areas you want to understand. If that isn’t possible, then RTY is not a useful tool for you.
Remember that 100% inspection is only about 80% effective… so with that in mind, I suggest you perform a cost justification analysis for running DOEs, special test runs, or special inspection on a few lots of material. Depending upon your reject rates and the related costs of those rejections (including soft costs), it may be critical that you get a better understanding of where your rejections are occurring and what you need to do to improve your process, or you may be worrying when you don’t need to worry.
Good luck either way.
Sincerely,
KN – http://healthcare.isixsigma.com/library/bio/kniles.asp, http://www.KimNiles.com

0
#91856

Kim Niles
Participant

Dear CSSBB:
Good questions; here are some quick thoughts. DOEs are best applied where continuous variables and non-obvious interactions likely confound a situation. To that end, they never seem to be applied enough, in my opinion.
It’s an ever-present fight to get “process experts” to truly understand the power of DOE and to see where DOEs should be run when those experts don’t think they are needed. With that said, transactional processes don’t have as much potential to yield the same benefits as other types of processes, even though I’ve seen truly surprising and successful case studies performed on transactional processes.
I hope that helps.
Sincerely,
Kim Niles – http://www.kimniles.com – https://www.isixsigma.com/library/bio/kniles.asp

0
#59960

Kim Niles
Participant

Dear Ty:
I’m not quite sure where your frustration is.  Perhaps you are calculating RTY based on the ten defect types as opposed to the different spots where a part can be rejected?  RTY metrics are best applied to determine how many (total) and where in your process your rejections are occurring.
I use RTY all the time on projects where I work.  It really puts things in perspective.  Every step may have very high yields but the end result is that all the little amounts of rejected parts at every step can really add up.  Without RTY metrics, managers can get complacent and think everything is fine.  With RTY metrics managers can quickly see where the real problems lie.
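As a minimal sketch of how those little amounts add up, rolled throughput yield is simply the product of the first-pass yields at every inspection point; the step names and yields below are hypothetical:

```python
import math

# Made-up first-pass yields at each inspection point in a process
step_yields = {
    "molding":   0.98,
    "trimming":  0.97,
    "assembly":  0.99,
    "packaging": 0.96,
}

# RTY: the chance a part makes it through every step defect-free
rty = math.prod(step_yields.values())
print(f"RTY = {rty:.3f}")
```

Each step looks fine in isolation (96-99%), yet the rolled yield shows that roughly 1 part in 10 is being rejected somewhere in the process, which is exactly the perspective described above.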
Does this help?
Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp, http://www.KimNiles.com

0
#59959

Kim Niles
Participant

Dear Brian:
I maintain a living article on the subject for ASQ San Diego at http://www.asqsandiego.org/articles/cpk.htm.  I hope this helps.  If you see problems or additions you can make please let me know about them as it is a living article.
Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp, http://www.KimNiles.com

0
#91266

Kim Niles
Participant

Dear Verity:
Injection molded part strength factors that really stand out are related to creating splay or weld lines.  Both of these can be optimized and may appear to come and go depending on what you have in control.  Splay is more related to moisture in your mix.  Weld lines are more related to cold fronts coming together.  Make sure you cover those two somewhere in your study.  You also might want to review my article on DOE as I saw your other posts that it addresses.   See https://www.isixsigma.com/library/content/c030616a.asp and good luck.
Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp  – http://www.KimNiles.com

0
#90595

Kim Niles
Participant

Dear Kraig:
You are on the right track by purchasing “World Class Quality”. Keki Bhote told me personally, during a class I took from him, that Dorian Shainin licensed him alone to write about his techniques. This is another reason why you are having trouble finding information.
Regarding other comments on this thread that suggest these techniques are out of date: I agree that they go back a few years, but I disagree with regard to improving supplier quality and occasional use. I have been in several situations where we desperately needed to improve supplier quality and the supplier was desperately afraid of using statistics. In those situations, I found Shainin techniques very helpful. I have also used the techniques in-house on occasion, where I wanted to use very simple graphs to explain variation.
Good luck with your studies and getting that job you want.
Sincerely,
KN –   https://www.isixsigma.com/library/bio/kniles.asp ; http://www.KimNiles.com

0
#90414

Kim Niles
Participant

Dear Stacy:
One of the prerequisites for MSA is having a stable process. It’s hard to tell from your email, but I’d guess you don’t have one. However, I’d also guess that there are other ways to accomplish what you want (assuring high confidence that you can hire, train, and manage workers so as not to affect production quality, right?).
Make sure you’ve characterized the process fully so that anyone can fully understand ownership, SIPOC, flow, step criteria, etc. Make sure you’ve measured all the key steps against specs or criteria. Make sure you understand the effect of all key variables that can change (temporary vs. permanent, shift differences, etc.). Make sure you’ve documented and trained against all the key aspects of the process.
I hope that helps.
KN – http://www.KimNiles.com

0
#90066

Kim Niles
Participant

Thanks Dr. Burns:

I believe we are thinking on parallel paths, just offset a bit from one another due to our vantage points (context).

With regard to your number one, I should have chosen a different word other than “set” as I meant what you stated.  My mistake.

With regard to your number two, again I agree but add that it all depends upon the context of the situation. What does implementing a quality program really mean? One person, a team, 50 QEs, etc.? How many significant control variables are we talking about? How much drift is there? How much does it cost to control that drift relative to profits made with the drift “as is” (including soft costs)?

With regard to your number three, again I agree relative to using SPC charts to monitor a process.  However, after considering different context as outlined in the paragraph above, my statement of “…the effort it would take to understand and manage all the control variables” can have a lot of different connotations.

Along these lines, regarding your statement of “World Class Quality = On Target with Minimum Variation”, that’s a great general statement that also depends upon the context surrounding that statement.  What does “minimum variation” really mean?  How much does it cost to reach that “minimum variation” or  “World Class Quality” relative to long term profits made (including soft costs)?  Of course obtaining “World Class Quality” doesn’t mean that all other aspects of business performance are compromised.  In fact, they all must work together in optimized harmony to obtain “World Class Quality”.

Does this help?

Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp, http://www.KimNiles.com

0
#90045

Kim Niles
Participant

Dear Reigle:

I’ve really enjoyed reading your posts and wish to thank you for taking the time to make them.

Secondly, I see you worked with Dr. Harry in developing your book “Six Sigma Mechanical Design Tolerancing” but the book appears to be out of print.  How can I get a copy?

Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp, http://www.KimNiles.com

0
#90042

Kim Niles
Participant

Dear Dr. Burns:

Regarding your first scenario (7% of points, depending on the shape of the distribution, falling outside a control limit): I argue that you have your control limits set too close together, because the process is likely to shift if it contains more than just a few control variables and is given enough time. From my experience working with manufacturing processes having 30+ significant control variables, they all shift given enough time. All 30+ of those significant control variables are tugging away at the mean in their ever-present quest to reach entropy … [smile].

Reviewing your second point (shifting due to tool wear) helps me explain my line of thought. I agree, as I understand your point, that shifts due to tool wear are manageable and therefore are more of a quality control problem than a natural occurrence. However, I go on to say that most likely all 30+ significant control variables in my stated typical manufacturing process are also manageable in the same way, and therefore at some point we have to consider shifting a noise-based natural occurrence, relative to the effort it would take to understand and manage all the control variables tugging away at that mean. Practicality begets noise, which begets the 1.5 sigma shift.

Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp, http://www.KimNiles.com

0
#89783

Kim Niles
Participant

Dear Elena:
We in ASQ San Diego are working on developing a related set of distance education courses that include Six Sigma training in support of the ASQ certification.  While we haven’t actually conducted a course yet, we have developed and posted a list of advantages and disadvantages of what we call e-courses at http://www.asqsandiego.org/news/may03edmail.htm#edmessage as follows:
Advantages of E-courses

E-courses will be less costly to participating students. No transportation costs.
Overall it will take less time to participate. No driving, parking, or walking times, etc.
E-courses offer more flexibility to students, e.g. students can take a “class” during lunch hour at work. If a student is sick, he/she does not have to miss a class.
E-courses force students to be more independent and proactive in pursuing their study goals.
By posting/reading questions on the course web page, participating students often learn more from the other students than would be the case in normal classes.
E-classes are usually more structured. Instructors who teach E-courses tend to follow the course book much more closely than they would do during in-class instruction.
We may get student enrollments from other ASQ sections all over the world.

Disadvantages of E-courses

There would be no real in-class lecture. Reading several pages of an instructor’s thoughts and explanations may not be as clear to the students as a combination of verbal and visual instruction.
Very limited instructor-student and student-student interaction. Students do not get to know the other students or the instructor as well as in-person classes.
The instructors can’t give immediate feedback. Students have to e-mail or post a question on the web page and then go back the next day for the answer.
E-courses may be perceived by the students as getting “less value for their money” compared to in-class courses.
Occasionally, log-in or computer problems may hinder students from getting online.

More specifically related to your question, another foreseen disadvantage would be in getting proctors to verify or validate the student’s project.

I hope this helps.  I look forward to reading your results.

Sincerely,
Kim Niles – http://www.kimniles.com – https://www.isixsigma.com/library/bio/kniles.asp

0
#89418

Kim Niles
Participant

Dear Markus:

Here are some related bits taken from my CSUDH MSQA thesis entitled “The Importance of Metrics on Six Sigma Program Implementation”.

+~+~+
The 1.5 Sigma Shift Allowance is a way of approximating and accounting for long-term variation of the mean of a process metric.  It also conveniently allows Six Sigma targets to appear as more realistic goals (i.e. 3.4 dpm vs. 0.002 dpm).  Mikel J. Harry, the President of the Six Sigma Academy, is credited as being instrumental in the development of Six Sigma and the 1.5 sigma shift (Maguire, 1999, p.27-34).

There are two main ways that Harry attempts to support the shift statistically. Both are highly controversial. Harry cites several old articles on statistical tolerancing and process shifting to justify the shift, quoting Bender (1975) and Gilson (1951) to support this position (as cited in Harry, 1997). An article by Tadikamalla points out that Harry is taking these articles out of context. Tadikamalla states that the mean of a stack of disks would be expected to shift given the various disk sample combinations in the stack. However, he says, “Harry seems to have misinterpreted the factor of 1.5 as the allowance for the shifts in the mean of a single component due to its being manufactured in different lots. It does not make sense, to allow for a 1.5 sigma shift in the process mean of individual components” (Tadikamalla, 1994, p. 83-85).

The second, slightly different and also misleading, method used to support the shift involves varying sample sizes and shifts in distribution normalcy. Smaller sample sizes result in apparently larger variation and greater reported shifts even if the population universe does not change. Harry (1992) refers to Evans (1975) to justify the 1.5 sigma shift in this manner. He quotes Evans: “…shifts and drifts in the mean of the distribution of a component occur for a number of reasons…for example, tool wear is one source of a gradual (nonrandom) drift…which can cause (nonrandom) shifts in the distribution.” Harry suggests that a generalization can be made and develops a mathematical equation for “the magnitude of inflation imposed on the instantaneous reproducibility with a compensatory constant used to correct the sustained reproducibility for the effect of nonrandom manufacturing errors, which perturbs the process center.” He claims that the general range of the compensatory constant is between 1.4 and 1.8 and uses Z shift = SQRT{[c^2(ng-1) – g(n-1)]/ng}, where c = his constant, n = subgroup size, and g = number of subgroups; n is usually between 4 and 6, and g between 10 and 100. Using c = 1.8, n = 5, and g = 50 results in a Z shift of 1.49. He calls this the standard mean shift correction.

There are also motivational and practical reasons to support the shift. Gary Wasserman (2000) is one of several who believe that, in the real world, special-cause variation is the norm and data are often serially correlated and/or non-normal. Therefore, the shift tends to negate some of these real-world data integrity problems (p.23).

Gregory Watson (2000), ASQ president in 2000, states, regarding the controversy over the statistics behind Six Sigma, that there are three groups of thought: those in favor, those against, and now a new group taking the middle ground. He says that those in the middle see the controversy but recognize that management is paying more attention to quality. He says, “It is acceptable to have a non-purist approach to statistics that gets results quickly, if that is management’s decision.” (p.16).
+~+~+

I hope that helps.

Sincerely,
KN
http://members.cox.net/asqsd/kn/index.htm
https://www.isixsigma.com/library/bio/kniles.asp

0
#89171

Kim Niles
Participant

Thanks Jeffrey:
Interesting article. I was most surprised by the last point: while winning over the masses with results (vs. pushing for company-wide training) has always been my approach, I’ve never been in a high enough position to “do the right thing,” which I’ve read to be training the masses first. Perhaps I’m missing the point?
I will admit that many books written on change management and implementing Six Sigma are written by consultants who are biased in that they would love to be able to train a whole company. So why are you not biased in this regard?
KN –  https://www.isixsigma.com/library/bio/kniles.asp

0
#89112

Kim Niles
Participant

Great question Charles:
Certifications are:

some form of support for what we do and say
a reward for some form of efforts made
a cheerful reminder of some success in our area of interest
a nice looking picture on the wall (art)
a symbol of hope for our future
a requirement for gainful employment
a form of benchmark for our profession
a sales tool used to impress the customer as he / she enters your office
an example of proof to show our parents that we lived up to their expectations
a statement we make to our kids of what success is all about
Like everything in life, there is variation in certifications: some are hard to get, some easy, some costly, some nearly free, some highly correlated with abilities learned, some not, etc., but they are all valuable in some way or other.
You brought up another excellent point: why don’t we have CEO certifications? Don’t we think CEOs should have proof of anything other than some form of prior success and/or financial aptitude? If only a small percentage of these recent Enron wannabe CEOs had had prior basic ethics training, think how much better the world would be today.
Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp

0
#88964

Kim Niles
Participant

Dear Paul:
FYI, I’ve experienced your type of situation many times before and have found that, almost all of the time, the best thing to do is to work on the response. It’s rare to have a situation where only a discrete output can be used. There’s almost always some form of variable that could be used, either by focusing in more on the output itself (i.e. finer measures), thinking outside the box with regard to tabulating the output (i.e. percentages), or through correlation with some other form of variable output (i.e. end-use temperature). Think about it: if you were “all seeing,” you could probably find a million differences between OK and NG (no good) parts. On the molecular level they must be very different to varying degrees. They likely affect, or could be found to affect, something else to varying degrees. There must be different ways to look at the big response-measurement picture.
Think about why it is good or bad. Perhaps something else, such as end-use temperature or color, correlates well to the discrete good / bad? Perhaps percentages of good / bad in a sample can be used? Perhaps you can focus in on how good or how bad “good” or “bad” really is?
Good luck either way.
KN –   https://www.isixsigma.com/library/bio/kniles.asp

0
#59915

Kim Niles
Participant

Dear Rocket:
I would imagine that by now (4 months after your post) you might be finished with your project.  If so, how did it go?
My thoughts regarding your question are that if the process is in place and working you should use DMAIC, as DMADV is mainly for development.  Also, improvement projects tend to follow the Pareto principle such that about 80% of the improvement is made near the beginning as low hanging fruit is plucked… so I bet the answer you sought was somewhere in the middle.
Sincerely,
KN

0
#85523

Kim Niles
Participant

Dear JD:
Excellent question.  I’d like to present a different point of view than what I’ve seen posted.
How many times have we gone into a store and bought something we didn’t know existed or that we didn’t know we needed until we saw it?  It happens now and then.  Therefore, now and then there is obviously no correlation between VOC and innovation because now and then we as customers don’t even know in advance what we want or need.
Simple math tells us that by removing “now and then” from all of the time that innovation occurs still leaves us with a vast majority of time when VOC does correlate to some degree.  We can then go on to show via numerous case studies published in the literature that at least a large portion of the time VOC is critical to innovation.
I hope that helps.
KN – https://www.isixsigma.com/library/bio/kniles.asp

0
#59853

Kim Niles
Participant

Dear Leon:

One more point of clarification more specifically related to your question is that covariates in DOE are uncontrolled variables that influence the response but do not interact with any of the other factors being tested at the time.  Therefore, if they are present during the experiment then they would show as measurements of error.

I hope that helps.

KN http://healthcare.isixsigma.com/library/bio/kniles.asp

0
#59852

Kim Niles
Participant

Dear Leon:
Your title would appear to be a mistake, as your DOE question does not appear related to TRIZ.
Regarding your question, covariates are random variables you treat as concomitants … or in this case other influential variables that also affect the response.  Using DOE, we can deliberately measure the effects of those variables or, if we are unable to control them during the experimentation process, we measure them indirectly as part of the error in our experiment.
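To make that concrete, here is a minimal sketch with toy numbers (invented purely for illustration, not from any real experiment) showing how modeling a covariate absorbs variation that would otherwise land in the error term:

```python
import numpy as np

# Toy example: a 2-level factor x1 plus an uncontrolled covariate.
# The response depends on both; leaving the covariate out of the
# model inflates the residual (error), including it absorbs that
# variation.
x1  = np.array([-1, -1, 1, 1, -1, -1, 1, 1], dtype=float)
cov = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
y   = 5 + 2 * x1 + 3 * cov            # noiseless, for clarity

def residual_ss(X, y):
    """Least-squares fit, returning the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

ones = np.ones_like(x1)
rss_without = residual_ss(np.column_stack([ones, x1]), y)       # covariate left in "error"
rss_with    = residual_ss(np.column_stack([ones, x1, cov]), y)  # covariate modeled
print(rss_without, rss_with)   # rss_with is essentially zero
```

The point of the sketch is only the comparison: the same data, with the covariate modeled, leaves far less unexplained error against which factor effects are judged.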
I’ve never heard of distinctions being made between “covariate DOE” and or “regular DOE analysis”.  I’d be interested in knowing more regarding what brought about your question.
Either way, I hope that helps.
Sincerely,
KN http://healthcare.isixsigma.com/library/bio/kniles.asp

0
#84576

Kim Niles
Participant

Dear Newbie:
There are many reasons for choosing a Black Belt candidate and you might or might not fit the pre-planned criteria as well as others around you.  Therefore, all you can really do is to communicate your understanding of and interest in becoming a Black Belt.
However, Six Sigma is about improving the bottom line, so if you can state your desire in similar terms, such as by pointing out a high dollar need in your area of expertise that you would like to use as your Black Belt project, then you may be able to create your own ROI (return on investment / justification for training) and give yourself the advantage over other potential candidates.
Good luck either way.
KN https://www.isixsigma.com/library/bio/kniles.asp

0
#59805

Kim Niles
Participant

Continental Rehab Health Hospital in San Diego is reported to be using Six Sigma. See: http://www.continentalrehab.com/Production/RehabHospitals/r_contnentl_home.asp
KN

0
#81505

Kim Niles
Participant

Good question Dave:
There is no regulatory body that approves what is or isn’t a Six Sigma company. Therefore, the only obvious criteria for being a “Six Sigma company” is to say that you are one.  Beyond that it’s all assumptions.
Until recently the only criteria for being a Black Belt was to say you were one…. things change, slow but sure.
KN

0
#81296

Kim Niles
Participant

Dear Billybob:
I like your posts. Regarding this one, globally and generally it means that you can’t say the interactions are significant with more than 90% confidence. This may mean that:

The test ranges you used were not large enough to show greater differences that might exist relative to error.
The interaction really isn’t important.
Your required confidence level may be set too high for the decision at hand.  When making a binary decision (i.e. turning a knob left vs. right), anything better than 50/50 is useful information, assuming no other information exists.  If your p-value = 0.2 then you have 80% confidence that your interaction is significant, and the remaining 20% could go either way (assuming an equal distribution of error).  Therefore, 80% + (20% / 2) = 90% confidence that you should move in one of the two directions as directed by the DOE.
Other things that you didn’t control changed during the experiment and affected your results. This would be reflected in the amount of error in the experiment.
The data departed from normality enough to adversely affect your results. You can check the data for normality, and the R^2 value shows how well the data fit the mathematical model used.  You can also increase your sample size per run to take advantage of the Central Limit Theorem.
The matrix you used didn’t have enough degrees of freedom to isolate significance relative to error / residual variation. You would have seen more significance if you added center points, replicated or augmented the experiment, or picked a matrix with more power.
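For what it’s worth, the directional-confidence arithmetic in the third point above can be sketched in a few lines. This illustrates the reasoning in the post (confidence splitting evenly between the two directions), not a standard statistical test:

```python
# Sketch of the "binary decision" arithmetic: if p = 0.2, we have 80%
# confidence the interaction is real, and we assume the remaining 20%
# splits evenly between the two possible directions.
def directional_confidence(p_value: float) -> float:
    """Confidence that the effect points in the indicated direction,
    under the even-split assumption described in the post."""
    confidence = 1.0 - p_value                  # e.g. p = 0.2 -> 80%
    return confidence + (1.0 - confidence) / 2.0

print(directional_confidence(0.2))   # -> 90% confidence in the direction
```

Note the limiting cases: a p-value of 0 gives 100% directional confidence, and a p-value of 1 gives 50%, i.e. a coin flip.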
I hope that helps.
Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp

0
#81107

Kim Niles
Participant

Dear Dan:
Good questions but I’d be surprised if you get any response.  Let me see if I can’t stir up the pot for you (smile)…
I try to keep my finger on the pulse of Six Sigma here in San Diego and so a semi-wild guess for our area is ~10-20 of ~75-150 large companies.  Therefore, based on these figures one might wildly estimate that ~17% (~7% to 27%) of all large companies in America have switched so far.  I’ve made a ton of assumptions here, one big one is that all large companies are alike which I know isn’t the case.  I believe San Diego tends to have a lot more small companies.
Of course, lots of companies are considering it and we are now seeing all types of industries making the switch.
I hope that helps bring in some real answers.  Good luck either way.
KN

0
#80404

Kim Niles
Participant

Dear Adam:
There are a couple articles already on the shift here at isixsigma. See the first one at https://www.isixsigma.com/library/content/c010701a.asp and then https://www.isixsigma.com/library/content/c010311a.asp on revisiting the shift. The second article also points out several good posts on the subject.
While I was shocked and disillusioned about Six Sigma after first hearing of the shift, over time I began to understand how important it is to Six Sigma for the following reasons:

It first opened my eyes to the possibility that processes can and likely do regularly shift. I had taken several statistics classes and never picked up on this.
It relaxes the unreasonable goal of six sigma being 0.002 ppm out of spec to an apparently reasonable goal of Six Sigma being 3.4 dpmo.
It is key to providing Six Sigma with novelty needed to distinguish itself from other programs such as TQM.
Since in the real world special cause variation is the norm and data is often serially correlated and/or non-normal, the shift tends to negate some of these real-world data integrity problems.
It provides insight needed to help me understand how common SPC out-of-control rules can be erroneous. See Controversies and Contradictions in Statistical Process Control at http://www.asq.org/pub/jqt/past/vol32_issue4/qtec-341.pdf
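The two defect rates mentioned above (0.002 ppm for a centered process, 3.4 dpmo with the 1.5 sigma shift) can be reproduced directly from the normal distribution; here is a quick sketch using only the standard library:

```python
import math

def upper_tail_ppm(z: float) -> float:
    """Parts-per-million of a normal distribution beyond z standard
    deviations (one tail): P(Z > z) * 1e6, via the complementary
    error function."""
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

# Centered process: defects fall in both tails beyond +/-6 sigma.
centered = 2 * upper_tail_ppm(6.0)      # ~0.002 ppm
# Mean shifted 1.5 sigma toward one limit: the near tail sits only
# 4.5 sigma away and dominates the far tail.
shifted = upper_tail_ppm(4.5)           # ~3.4 ppm, the familiar 3.4 dpmo
print(centered, shifted)
```

In other words, the famous 3.4 dpmo figure is really the one-sided tail area at 4.5 sigma, which is where a 1.5 sigma shift leaves a nominally six sigma process.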
Regarding statistically supporting the generally stated shift, it can’t be done with high confidence. I’m working on a third shift article that may or may not be published in the future, with hopes of explaining the shift in more detail. I plan to explain how Mikel Harry cited Bender (1975) and Gilson (1951) to support this position in two different ways (compounding error due to tolerance stacking and normality differences with sample size), how Tadikamalla pointed out that Harry took those articles out of context, and the details of those arguments.
Gregory Watson, ASQ president in 2000, stated in regards to the controversy over the statistics behind Six Sigma that there are three groups of thought: those in favor, those against, and now a new group taking the middle ground. He says that those in the middle see the controversy but recognize that management is paying more attention to Quality. He says, “It is acceptable to have a non-purist approach to statistics that gets results quickly, if that is management’s decision.”
I hope this helps.
Sincerely,
Kim Niles
https://www.isixsigma.com/library/bio/kniles.asp
http://www.asqsandiego.org/contacts.htm#kn

0
#80398

Kim Niles
Participant

Dear Heidi:
Your post provides an interesting opportunity for me to improve the hit rates at the site I administer (see http://www.asqsandiego.org/ or http://www.asqsd.org/ ) since one way to do that is by spreading my url around …. (smile).  Not only are you and others reading this post inclined to visit but spiders pick up the link and give the search engines they are associated with a higher rating for my site.
The site I administer has had rising hit rates every month for 14 months, from a few hundred hits per month to now over 70,000 per month.  Each month I report to our board different things I’ve done to either improve the content for attracting repeat visitors or to improve its exposure to attract new visitors. The largest contributors to traffic growth were difficult to distinguish but are believed to be as follows:

Adding meta tags
Adding the additional url ( http://www.asqsd.org/ )
Advertising with 15+ search engines
Adding the virtual library with articles written by our members
Adding the jobs pages
Sending 2 emails per month to our members of updates or other reasons to visit the site.
Getting written up in Quality Progress Magazine
I hope that helps.
Sincerely,
Kim Niles – ASQSD Communications Chair https://www.isixsigma.com/library/bio/kniles.asp http://www.asqsandiego.org/contacts.htm#kn

0
#79943

Kim Niles
Participant

Thanks John:
I found bio and book information on Dr. Kotz and Dr. Johnson at http://www.wileyeurope.com/cda/product/0,,0471128449%7Cau%7C2739,00.html.
KN

0
#79935

Kim Niles
Participant

Dear Rick:
Thanks for bringing this thread to my attention.  Interesting discussion.
Quick comments I can add are:

See isixsigma article on the subject at https://www.isixsigma.com/library/content/c010806a.asp
Ppk has been highly controversial ever since the AIAG developed it, because it does not require a stable process.  Montgomery states in his Introduction to Statistical Quality Control book something to the effect that Ppk is “a complete waste of time”.
Sincerely,
KN    https://www.isixsigma.com/library/bio/kniles.asp

0
#79933

Kim Niles
Participant

Dear Ivette:
.97 x .94 = .91 = 91% for steps 1 and 2.  There isn’t enough information to compute the rest.
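As a quick sketch, the rolled throughput yield is just the product of the per-step yields:

```python
from math import prod

# Rolled throughput yield: the probability a unit passes every
# process step defect-free is the product of the step yields.
step_yields = [0.97, 0.94]      # steps 1 and 2 from the question
rty = prod(step_yields)
print(f"{rty:.2%}")             # ~91%
```

Additional steps would simply be appended to the list; each one multiplies the overall yield down further.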
Does that help?
KN

0
#79816

Kim Niles
Participant

Dear VH:
Regarding rolled throughput, Tom Pyzdek wrote a good article on this at: http://www.qualitydigest.com/mar00/html/sixsigma.html
Regarding opportunity counting, there are several ways to do it as follows:

Use a fixed number (typically 2 or 3) times the bill of material count to get the number of opportunities.
Use the following formula: Opportunities = C+P+S (number of Connections, Parts, and non-manufacturing Steps associated with the product).
Look at what the customer considers a reject and count those as opportunities.
One additional point to remember is that the determination of opportunities should be tied to quality characteristics that must occur properly in the eyes of the customer.
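As an illustrative sketch (the counts below are made up), once the opportunity count is fixed, the DPMO calculation itself is straightforward:

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1e6

# Hypothetical product using the C+P+S rule from the list above:
# 4 connections + 10 parts + 6 non-manufacturing steps.
opportunities = 4 + 10 + 6
print(dpmo(defects=7, units=500, opportunities_per_unit=opportunities))
```

Note how sensitive the result is to the opportunity count: doubling the opportunities halves the reported DPMO, which is exactly why the counting method needs to be tied to customer-relevant quality characteristics.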
I hope that helps.
Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp

0
#79017

Kim Niles
Participant

Dear Carrie:
Great question.  I can’t believe you don’t have a bunch of response posts.  I haven’t seen much on this in the literature so perhaps no one knows the answer.  A team leader obviously has to be a people person, self-confident, cost and variation reduction oriented, well trained, motivated, and alert.
I suppose a lot also depends upon the situation.  It might be best to trade off less of a people person for more of a technical problem solver if that is what is needed for a given project.
iSixSigma also published my related article at https://www.isixsigma.com/library/content/c020114a.asp that might help.
I look forward to reading other thoughts.
Sincerely,
KN

0
#78332

Kim Niles
Participant

Interesting question and thought-provoking comments. Please review and critique the following list of project selection guidelines. It’s supposed to be in order of priority and I just made it up on the fly after reviewing your messages.
Notes:

The potential savings must exceed expected expenditures after including soft costs in all cases.
Actual selection should be performed by the Master Black Belts and Champions as they have the best chance of seeing a project’s true potential savings and realistic costs.
In some cases, ROI’s (return on investment; cost justifications) will need to be performed.
Project Selection Guidelines (in order of subjective likelihood)

Strategy based projects in accordance to plan, balanced scorecard, known weaknesses, high risks, etc.
Obvious cost, customer satisfaction, productivity, reliability, or related projects taken by priority, identified through cost of quality programs, reject databases, customer complaints, management complaints, etc.
QFD or VOC reports
Areas of particular interest to sales support (i.e. where pretty pictures showing control help sales).
Old Six Sigma project or Kaizen event leftovers or spin-off projects.
Areas where artisan methods can be quantified and or converted to scientific control.
Areas where continuous variables are present but have not been optimized.
By asking why 5 times for every key process in the company.
Process owner concerns
Through Lean, 5S, or safety tour concerns.
What do you think?
Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp

0
#56193

Kim Niles
Participant

Thanks Martin:
I also am working on this issue, as I submitted a proposal to talk on the subject in January during the next ASQ Six Sigma conference to be held in Palm Springs. Perhaps we can help each other?
I agree that SMO’s (Small and Medium size Operations) should have a good chance of successfully implementing some form of Six Sigma, especially those in the service industry. My planned approach is to continue to:

Review the important aspects of Six Sigma (see my article entitled “What Makes Six Sigma Work” at https://www.isixsigma.com/library/content/c010723a.asp),
Review key implementation barriers in relationship to the size of the company,
Study SMO’s that have or are implementing what is or might be called Six Sigma,
Point out flexibility in traditional models that allows the SMO to tailor fit Six Sigma to its needs.
Regarding your specific question, I know of four local SMO’s that fit my related definition of “implementing what is or might be called Six Sigma”. One is a small but recognized software company, two are medical product based companies that I used to work for, and the fourth is the semiconductor product based company I now work for. I am not in a position to be very specific at this point. I see implementation barriers as coming from three categories as follows:

Financial Justification – Training costs vs. expected ROI.
Other Resource Justification – Limited personnel to dedicate full time to a project.
People Training Barriers – Change management, lack of management support.
What do you think?
Sincerely,
KN – https://www.isixsigma.com/library/bio/kniles.asp

0
#56191

Kim Niles
Participant

Dear Martin:
This is a very interesting topic to me.  I have some thoughts on this and have experienced what you might be asking for … but first please define SMO.  Does that stand for Small Manufacturing Outfit?
Sincerely,
KN –  https://www.isixsigma.com/library/bio/kniles.asp

0
#75808

Kim Niles
Participant

Dear PA:
Addressing your question literally, most Taguchi designs are the same as classical fractional factorial designs, and even full factorial designs such as the L8 = 2^3.
Reading between the lines, Taguchi took simple designs that existed and added a few other simple designs that utilize easy to make assumptions of non-existent higher order interactions to compile a group of matrices most experimenters would typically use.  I like to use them when I already know a lot about the process but I still need to screen out special cause variables.
Two potential problems exist with them.  One is that you are making more assumptions and so are trading resources for risk.  The other is that they are usually much lower resolution designs and so you need to follow up with confirmation experiments or lower your alpha risk requirements (i.e. you are not as likely to get significant results with very high confidence but sometimes that doesn’t matter).
I hope that helps.
Sincerely,
KN https://www.isixsigma.com/library/bio/kniles.asp

0
#71761

Kim Niles
Participant

Dear Carlos, et.al.:
Good question.  I just attended a Six Sigma workshop in San Diego where nearly the same question was brought up.
Key contextual aspects include:
1-     Defining what we mean by Kaizen, and Six Sigma.  I applaud your Blitz post (https://www.isixsigma.com/forum/showmessage.asp?messageID=9042), the one by Kunes (https://www.isixsigma.com/forum/showmessage.asp?messageID=8970) and the one by isoquality (https://www.isixsigma.com/forum/showmessage.asp?messageID=9044).
2-     Large companies use “Kaizen blitzes” or “Kaizen Events” both at the same time.  Example, Honeywell (former Allied Signal) uses both in combination (I know this from numerous personal sources).
3-     Six Sigma has been adopted more by very big companies with very big confounded situations and ROI potential as opposed to small companies where the processes are more easily fully understood and the ROI potential is small relative to the cost of implementing Six Sigma.
In my opinion, there isn’t a very big difference between a Kaizen Event and a Six Sigma project other than mandated talent, scope, and time frame (~3-4 days vs. 3-4 months).  Most other basic principles still apply to both.
I see the future of Six Sigma within the smaller company consisting of a lot more merger with Kaizen and the future of Six Sigma within the larger company consisting of a lot more Kaizen and Six Sigma in combination.  The reason for my statements is purely implementation cost justification.
I’ve heard a lot of complaints that the training Black Belts are receiving is not enough to meet the needs of managing a Six Sigma team unless the candidate already has a large head start before the training starts.  One way I see this happening is through Kaizen.
What do you think?
KN –   https://www.isixsigma.com/library/bio/kniles.asp

0
#71759

Kim Niles
Participant

Dear Gary:
I applaud your passion for Six Sigma and for your part in making Six Sigma what it is today. I also understand that you are angry in general over the success of Mikel Harry and the Six Sigma Academy. If you are the same Gary I’m thinking of, I understand you are also upset over related business concerns in some way connected with the Academy. I am not in any position to support or denounce anything you’ve said, nor am I asking you to elaborate. I am simply asking you to take another look at the exact wording of the letter at https://www.isixsigma.com/library/content/c020131a.asp. I don’t see anything that is “wrong” in the text provided. I really like the second and last paragraphs. I grant you that the fourth paragraph is self-promotional, but every statement, when taken word for word, does not look “wrong”. In fact, the next paragraph clarifies that no direct claim is made.
I’m sure that there is some truth somewhere in what you say or you wouldn’t be so passionate about it, but I ask you, where would Six Sigma be today if it were not for Mikel Harry and the Six Sigma Academy? Would GE and Allied Signal have taken to it and dramatically changed its course? I doubt it… we’d still be attending “Quality Circles”.

0
#70961

Kim Niles
Participant

Perhaps “most myths are due to resistance to change” but if we were to apply the 80/20 rule to this issue, the biggest myth would be:
– “Six Sigma” = six sigma
(i.e. everyone I’ve ever met that is turned off about Six Sigma the philosophy is really just confused that it’s six sigma the mathematical expression).
KN

0
#70626

Kim Niles
Participant

Dear PO:
It’s true that Six Sigma isn’t for all situations.  If the product and process is well understood and the size of the company is very small and all the members of the company communicate well together and the industry is very old such that improvement is not likely and the market is waning such that every bit of profit is squeezed out of old equipment that has been written off years ago and the employees are untrained / off the street with nothing to add anyway, well……I could see where Six Sigma could be a waste of time .
KN

0
#70227

Kim Niles
Participant

It’s got to be a sales thing… consultants at Motorola wanted to distinguish between a good team leader and the best type of team leader.  Martial arts uses the term “black belt” for its very best students.
KN

0
#69799

Kim Niles
Participant

Dear Craig:
This is an interesting topic that I haven’t seen addressed in the Six Sigma literature other than in either very general terms such as you’ve addressed, or from other non-Six Sigma specific books such as “Keeping Score” by Mark Graham Brown and “Measuring Performance” by Dr. Bob Frost.  There are a lot of other books on the subject from perspectives of economic value added (EVA), enterprise resource planning (ERP), balanced scorecards, activity based costing (ABC), cost of quality (COQ or COPQ), strategic planning, etc.
Like O.P. suggested above, you must start with the CEO’s vision, then goals and objectives, then tie in a handful of key metrics that must happen to allow the goals to be met.  After that, 10-30 critical metrics should be developed that stem from the key metrics.  The results of your BB improvement projects are reflected in the critical and key metrics.  However, one of the critical metrics would likely be average ROI from BB improvement projects, etc.  Other books call them critical and performance metrics or add other tiers of metrics, but the point is the same.
I hope that helps. Good luck,
KN

0
#69797

Kim Niles
Participant

Two quick comments to add.
1- There are cheaper ways to go.  One company is selling a web based training package for exec training.  Also, once MBB’s are on board and trained, they can start training others.  BB’s can train GB’s etc.
2- Six Sigma is the way it is because it’s evolved from TQM-based and other less optimal quality improvement systems.  That doesn’t mean it’s as good as it will get, nor that something else won’t come along that is even better in the future.
KN

0
#69665

Kim Niles
Participant

Dear beenthere:
If you mean the last point… ok, I’m appealing to our emotions for the fun of the post.  If you mean any other point, I’ve been there and done that for everything… numerous times, without getting fired.  It’s worked for me.
KN

0
#69661

Kim Niles
Participant

Dear Joe Q:
My motto is “when faced with the crazy situation, get crazy”. Here are some hypothetical thoughts:

If you have the means to send the CEO an anonymous email or “suggestion to the CEO”, etc., then do it by praising him or her on the Six Sigma approach, how it should embed Quality into every employee, which is right on track with the philosophy held by today’s best CEO’s (Motorola’s Bob Galvin, GE’s Jack Welch, Allied Signal’s Larry Bossidy, etc.). Then go on to list your problems in really general terms as “weaknesses of our current implementation”, “areas of concern”, or “areas for future improvement”, but don’t be really specific. So by now you’ve encouraged continued use of Six Sigma and listed the problems; you then need to spell out what he or she might do to solve them. Perhaps a proclamation or an edict of some sort, similar to the one Jack Welch proclaimed with regard to GE’s Six Sigma Quality initiative (“No one at GE should expect a promotion unless they are a Black Belt”), should be considered to speed things up. Try to list suggestions that don’t require any new financial resources. I don’t really know your situation so this may be political suicide… only you will know.
While Green Belts are handy for data collection, etc., they are also hard to keep busy when the project isn’t rolling along at full steam. My point here is that you should dig in and take your own data. You learn the most that way and gain the opportunity to make the biggest difference. Just dig in and do it all yourself regardless of what is typical Six Sigma. By showing positive results you will gain support. GET THOSE POSITIVE RESULTS.
One very much overlooked aspect of Black Belt training in my opinion is “adaptive learning”. Every project is different in almost every way. The Black Belt needs to adaptively learn what needs to be done and do it. If more resources are needed then you need to spell them out before any further expectation gaps occur. Write up a resource justification request spelling out (1) the problem, (2) all the possible options the company can use to solve the problem, and (3) your recommended option.
TQM really has all the right stuff, but due to implementation efforts such as yours with Six Sigma, it’s lost a lot of its steam. For the good of all quality efforts the world will ever see, don’t give up, and keep us posted on your progress.
Sincerely,
KN http://www.znet.com/~sdsampe/kimn.htm
https://www.isixsigma.com/library/bio/kniles.asp

0
#69638

Kim Niles
Participant

Dear Neynes:
The book below is a good book to explain the various ways companies can implement Six Sigma.  Look for the chapter on the Six Sigma road map.  See https://www.isixsigma.com/books/search.asp?sstr=&type=six+sigma&page=2&sby=
Pande, Peter S., Robert P. Neuman, and Roland R. Cavanagh. “The Six Sigma Way: How GE, Motorola, and Other Top Companies Are Honing Their Performance.” McGraw Hill. New York. 2000.
Basically, there are many different ways depending upon what’s right for the company.  Usually what works best is the top down approach where the CEO buys into it and proclaims a mandate.
Good luck.
KN  https://www.isixsigma.com/library/bio/kniles.asp

0
#69637

Kim Niles
Participant

Dear Marcus:
Follow your heart.  Degrees are used to get you in the door and should always be pursued as possible but any company worth working for will see your energy and enthusiasm for Quality, and so will hire you.
I’ve been going to CSUDH over the internet to get a MS in Quality Science.  It’s a good option for me as it’s easy to fit into my busy schedule.  See http://www.csudh.edu/msqa/msqahome.htm
Good luck.
KN  https://www.isixsigma.com/library/bio/kniles.asp

0
#69621

Kim Niles
Participant

Keki Bhoti wrote about “variable search” in his book:
Bhote, Keki and Adi. “World Class Quality – Using Design of Experiments to Make it Happen”. American Management Association. NY. 2000.
I need to dig it out to see if this is what you guys are talking about.  I remember that his “variable search” and/or “multi-vari” techniques supply approximations of interactions.  I also know he calls these techniques DOE’s even though they may not meet classical definitions as such.
KN

0
#69475

Kim Niles
Participant

Dear Allen:
I like your answer, and since you do seem “experienced”, I’m wondering if you care to extend the confines of your definition a bit to include what “we” really include in our measurement.  For example, our project DPMO measurement really consists of some CTQ/CTC/CTD scorecard of sorts, which seems to vary from company to company and project to project.  Some include ROI/COQ or other Hidden Factory measurements, gains in market share, etc.
What have you or anyone else reading this seen that can help me with implementation at my company in this regard?
KN

0
#69325

Kim Niles
Participant

Regarding tackling Six Sigma: I’m not sure if you mean to learn about it or to write about it.  To learn about it I suggest you study this iSixSigma.com site thoroughly.  To write about it, first pick a sub-topic such as gurus, the 1.5 sigma shift, history, future, teams, change management, defect opportunities, case studies, etc.  Next buy as many books as you can since the library doesn’t have many (see the “buy books” button above) and post questions.  I took an on-line class that is no longer offered which helped.  There are also a few articles on other sites about the subject that you can search for.
I hope that helps.
KN

0
#69271

Kim Niles
Participant

Interesting topic.  From my studies of the subject, Mikel Harry and Bill Smith get all the credit. Perhaps they should be called originators or master gurus. Bill Smith created the awareness and Mikel took the ball and ran with it. From what I’ve been told, there were upwards of 19 other consultants involved. Is that true?
We first need to define “guru”. Webster’s dictionary at http://www.m-w.com/cgi-bin/dictionary defines “guru” in our context as:

a teacher and especially intellectual guide in matters of fundamental concern or
one who is an acknowledged leader or chief proponent or
a person with knowledge or expertise.
Therefore, by this definition, I would consider all authors that have produced well-selling books to be gurus, including:

Keki Bhote who was involved in the development of Six Sigma (per his books)
Forrest Breyfogle
Subir Chowdhury
Mikel Harry, Ph.D., and Richard Schroeder
Peter Pande, Robert P. Neuman, and Roland R. Cavanagh
Tom Pyzdek – not only for his books on the subject but for all the other things he’s done as well (i.e. conferences, websites, books, founding the IQF, etc.)
Other gurus in my book:

A few dozen guru consultants, practitioners, and college teachers.
Almost every Master Black Belt I’ve met so far.
Who have I missed?
KN

0
#69188

Kim Niles
Participant

Dear Arvin:
Regarding your comment “what is Shainin”, see Keki Bhote’s book “World Class Quality” at: http://www.amazon.com/exec/obidos/ASIN/0814404278/qid=1002820151/sr=1-6/ref=sr_1_6_6/102-3484353-9791333
I took a class from Keki, who claimed to be the only licensed author of Shainin’s techniques.  These techniques are ways to visually detect approximate statistical significance and confidence as one would with a DOE.
Dear Paul:  Please send me a copy at [email protected]
Thanks.
KN http://www.znet.com/~sdsampe/kimn.htm

0
#69104

Kim Niles
Participant

One more thought:  I found a web site that proclaims Six Sigma to be a philosophy…interesting.
See:  http://www.thesamgroup.com/sixsigmafaqs.htm
“What is Six Sigma?  Six Sigma is a quality philosophy that uses customer-focused goals and measurements to drive continuous improvement at all levels in any enterprise. The goal is processes that are so robust that defects are measured at levels of only a few parts per million. Six Sigma implementation requires leadership from top management, since it must be embraced throughout the organization.”
KN

0
#69067

Kim Niles
Participant

Thanks Rajanga:
I am not familiar with EFQM but I did a quick internet search for it and found many non-English sites (German?).
Can you tell me more about it?
I am still having trouble with the question of QIS vs. QMS. Perhaps my confusion lies in not having a clear definition of either term. From your post I am led to believe that a QMS requires more than the customer-focused, cost-oriented, process-based improvement of Six Sigma. You stated: “People results, Customer results & Society results being achieved by Leadership – Driving Policy & Strategy, People, Partnerships & Resources – through Processes – leading Ultimately to excellence In Key performance results”, which I see as placing importance on aspects not likely to be first-order coupled to making money (NNDC = Not Necessarily Directly Coupled aspects such as policy, strategy, society, etc.).
I argue that any Six Sigma program should naturally contain all of your NNDC aspects given time as the company matures toward Six Sigma.
What do you think? Have I misunderstood your message?
Sincerely,
KN

0
#68915

Kim Niles
Participant

I did a quick search of Motorola’s website and found very little on their quality management system.  In fact, they have a great history site at http://www.motorola.com/content/0,1037,115-280,00.html with information going back to the 1920s, but they don’t even mention receiving the Malcolm Baldrige award or Six Sigma.
However, they did have a page that implies they are Six Sigma at http://www.motorola.com/MIMS/ISG/ING/quality/, so since they are near Six Sigma in reality and therefore would not want to create a web page defect, ergo, they must still be practicing Six Sigma.
KN http://www.znet.com/~sdsampe/kimn.htm

0
#68958

Kim Niles
Participant

Oops I missed it, Thanks. KN

0
#68223

Kim Niles
Participant

It looks like iSixSigma just found one of their 3.4 allowable defects in this thread (see original post).
Quality management systems are only as good as their acceptance and collective use throughout the corporation. Even the smartest individuals in the long run can’t outperform the cross-functional well-trained team. It’s a numbers game. The more Quality minded individuals you have the better you will perform.
I’m sure most all of us have worked for companies where “Quality” means “Inspection”, so I don’t need to remind us how comically bad quality in a company can be. Six Sigma aggressively mandates engagement of all employees to think like traditional Quality department personnel. It’s the first quality management system ever that even comes close to successfully spreading quality out away from the department and into its employees. In terms of quality tools and techniques, it’s not much different from all those other systems, but there are little differences that make the big difference. See: What Makes Six Sigma Work at: https://www.isixsigma.com/library/content/c010723a.asp
My point here is that all the “bad” accusations I see in this string about Six Sigma pale in comparison to the big-picture value.
KN https://www.isixsigma.com/library/bio/kniles.asp

0
#68181

Kim Niles
Participant

Interesting article and comments.  I’ve run into the same mindset a few times, always from really smart people.  Unfortunately, they are usually the ones that force a company to reach out towards change… because their “intelligence” is so narrow that they end up causing the problems.
TEAMS ALWAYS OUTPERFORM INDIVIDUALS
However, in his/her defense, I realize that Six Sigma projects are not for everything.  The cost of “fixing” the problem must be less than the long-term cost of the problem (including soft costs).  With that said, as a company moves towards Six Sigma, the differences must get smaller and smaller.  I can’t wait to find out.
KN   https://www.isixsigma.com/library/bio/kniles.asp

0
#67963

Kim Niles
Participant

I say it can work.  Tom Pyzdek even wrote an article on how to improve your baseball game using DOE.  See http://www.qualitydigest.com/july00/html/sixsigma.html
KN

0
#67965

Kim Niles
Participant

I spent a lot of time going through all the books I could find on this subject and determined that reason #8 is common to those authors.  See “What Makes Six Sigma Work” at: https://www.isixsigma.com/library/content/c010723a.asp ; Reason #8: “Quality tools that never get used are thrown out. If we don’t need them, why spend time learning how to use them”.
However, I personally enjoy this discussion and feel that one can never learn too much about tools and techniques, as long as it’s cost effective to do so (that’s the kicker).
KN

0
#67678

Kim Niles
Participant

Good point.  I just went through Lean/Flow training, so your post caught my eye.  Quality Digest / Tom Pyzdek has a related article at http://www.qualitydigest.com/jan00/html/sixsigma0100.html that you might want to review; it lists the synergistic aspects you touch on.
Perhaps we need to come up with a set of lean tools such as the 5S’s that Six Sigma Black Belts need to learn (see: BB Body of Knowledge at https://www.isixsigma.com/library/content/c010618d.asp ).
KN  http://www.znet.com/~sdsampe/kimn.htm

0
#67560

Kim Niles
Participant

Another Thought.
Most would seem to agree that a capable process is one with a high Cpk value (1.33 or better), but by the dictionary definition of “capable”, a process would be capable if it can consistently produce one good part. If it can produce one good part, then with tweaking it is capable of producing 100% good parts, etc.  With this in mind, Grant and I have reached an agreement that a stable process is not necessarily capable and a capable process is not necessarily stable…  I’ve got a headache…
KN

0
#67455

Kim Niles
Participant

Dear Ken:

Yes, the idea of having a page of guidelines (copied out of our emails in this thread; incl “paranoia”) sounds like something I would do if I were in charge of the site.

However, I am pleased so far with the management of this site, and so am optimistic that iSixSigma will continue to grow and maintain its well-respected status with or without guidelines and/or other slightly off-focus / additional information.

KN

0
#67425

Kim Niles
Participant

I’ve been privy to some great off line “discussion” of terminology, posting guidelines, and “conversational direction pointing” with Ken, Bob, and Grant that I’d like to highlight in order to maintain the course of progress towards our goal of saving the future of all science from heated debate over “what is a stable process”.

First of all, our string was summarized by Kerri Simon at:
https://www.isixsigma.com/library/content/c010625a.asp

Directional topics to explore are as follows:
1. What are the economic properties of a stable process?
2. What are the properties of an unstable process?
3. How would we define “measurement capability”?
4. What experiences do we have supporting capability measurement processes?

Here are highlights of post guideline discussions:
1. The discussion should be limited to the topic, and not involve any exchange of negative remarks of any kind.
2. Let’s try to keep the discussion based on known understanding, not hearsay.
3. When applied statistical methods for various techniques are provided, they should be accompanied with supporting reference which includes author, title, and page numbers.
4. Any methods suggested without references will be considered personal opinions, unless a derivation can be made from known reference.
5. Personal opinions or experiences should not be considered theory.
6. We should each keep in mind the primary goal of this discussion is to come to a common understanding and language of the topic. Any discussion point that does not have a line to the central topic will only add to confusion and misunderstanding.

I hope this starts some fresh ideas.
KN

0
#67222

Kim Niles
Participant

Admirable comeback. Anger gets the best of all of us from time to time. Makes one want to go squirrel hunting.

KN

0
#67205

Kim Niles
Participant

Dear Bob / anonymous

Thanks for taking the time to post your sincere thoughts regarding the subject. However, regarding the “ASIDE”, I have to comment that you’ve strayed a bit off the subject in what would appear to be paranoia, anger, and/or bad communication.

I’m glad you posted under “anonymous”, because you aren’t the only one that has done this, so I can more easily address it in generic terms. Three times during this “discussion”, from three different people (assuming you are different, and from your post I have high confidence in that), online and off, I have heard the same type of thing you posted: that of people who are afraid of others that post under different names, of others whose ideas are so crazy that they must be stopped or they will do harm to the truth, and/or of those that are trying to discredit others. Think about it; it’s paranoia, anger, and/or bad communication.

Ways to combat this fear:
1- Check the properties of the email address which often shows the true person behind it.
2- Stay away from this site altogether
3- Learn to accept the worst and move on from that. Accept that some people are “bad guys” and that there is nothing you can do about that but lead by good example, post the truth as you know it, move on, and hope that rational people will see the difference.
4- Try to be a better listener. Those “bad guys” aren’t bad because they just like to see others suffer. They are “bad” because they are having a hard time communicating their strongly held point of view. By really understanding their point of view, you might change yours…or at least learn something. Worst case, they relax because you really tried to understand them.
5- Reference and or refer the “bad guy” to the on-line bible of internet etiquette at: http://www.fau.edu/netiquette/net/index.html

By the way, I have even been accused of posting anonymously in order to slander others which I state loud and clear has never happened. That’s all I can do.

Sincerely,
Kim Niles – Quality Engineer
Delta Design, Inc. (www.deltad.com)
Phone: 858-848-8000; ext. 1295
http://www.znet.com/~sdsampe/kimn.htm

0
#67187

Kim Niles
Participant

Well, this isn’t the full summary of all our posts I promised but it’s a first step.

I’ve reviewed all the posts and come to realize that they all fit into two global method categories for defining what a stable process is.

The first category is using statistics to define a stable process. Most of our posts were related to this method given that it seems to have the most controversy. Sub-categories of this method likely include:
1- Distribution type and importance
2- Variation type and importance
3- Entropy and philosophical potential

The second category is using economics. We didn’t discuss this much but it makes a lot of sense to me. Shewhart was mentioned as taking this line of argument in defining what a stable process is. I suppose that any process that consistently produces good economic results relative to expectations and or specifications could be considered “stable” regardless of how much or what type of variation there is within those expectations.

Why can’t we just accept this economic model as our definition? Is it too simple? Can we find a hybrid definition that includes statistical measurements as well?

We are making progress!!
Thanks.
KN

0
#67132

Kim Niles
Participant

Well, I count 30 posts not including this one which when printed out is over 16 pages in 10 point text. The sad thing is that we are no closer today to defining what a stable process is than we were a week ago when this “discussion” started.

From a really global perspective, I think we can all accept a definition that can’t possibly exist, that of “A stable process is one that contains only common cause variation”. Why is it that we can’t allow any reality into our definition of a stable process without a lot of debate? We just aren’t thinking outside the box!!

My next step is to do as the Quality gurus would do and start placing all the key points on sticky notes, then organize them into categories using affinity diagrams in order to make an effort to think our way out of this.

“Talk” to you later. Sincerely,
KN http://www.znet.com/~sdsampe/kimn.htm

0
#67070

Kim Niles
Participant

Statistical software packages like Statgraphics will perform DOE analysis on attribute data. In Statgraphics you just fill in a check box.

You might want to perform a paired comparison, where the “DOE” is non-mathematical and very simple, based on Tukey’s ranked quick tests. This method can differentiate interaction effects, but only in general terms such as “approximately 90% confidence in xx significance from factor A”. See: Bhote, Keki and Adi. “World Class Quality – Using Design of Experiments to Make It Happen”. American Management Association, NY, 2000.
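To make the quick-test idea concrete, here is a minimal Python sketch of Tukey’s “end count” (my own illustration, not code from Bhote’s book): pool the two samples, then count how many of the highest values belong exclusively to one sample plus how many of the lowest belong exclusively to the other. Tukey’s commonly cited critical end counts are roughly 7, 10, and 13 for about 95%, 99%, and 99.9% confidence; ties and very unequal sample sizes need special handling that this sketch ignores.

```python
def tukey_end_count(a, b):
    """Tukey's two-sample 'end count' quick test (simplified sketch).

    Returns the end count: how many values of the higher sample sit
    above all of the lower sample, plus how many values of the lower
    sample sit below all of the higher sample.
    """
    # If one sample contains both the overall max and the overall min,
    # the test gives no signal of a shift; return 0.
    if (max(a) > max(b)) != (min(a) > min(b)):
        return 0
    hi, lo = (a, b) if max(a) > max(b) else (b, a)
    top = sum(1 for x in hi if x > max(lo))     # values above the other sample
    bottom = sum(1 for x in lo if x < min(hi))  # values below the other sample
    return top + bottom
```

For example, `tukey_end_count([12, 13, 14, 15], [8, 9, 10, 11])` returns an end count of 8, which by the rough 7/10/13 thresholds would suggest about 95% confidence that the two samples differ.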

I hope this helps.
KN http://www.znet.com/~sdsampe/kimn.htm

0
#67062

Kim Niles
Participant

I just thought I’d clean up loose ends as I prepare to summarize all the different thoughts. I’ve re-posted below the three other related topic posts made elsewhere on this site recently, in an effort to look at it all at once. I’m having a great time with this, and learning a lot, thanks.

+~+~+
Stable Process: Posted By: Rajanga Sivakumar Posted On: Wednesday, 13th June 2001

Any process which performs in a predictable manner over a period of time i.e with known variances can be considered stable. However, a stable process does not necessarily mean that it is the best performing or ideal process. If the process can be improved to an extent that only “natural variability” remain, then it could be considered as the ideal process. This is my understanding and it may not be very correct.

Rajanga Sivakumar
+~+~+~+~+
Stable process: what is it? Posted By: SAMIR MISTRY Posted On: Monday, 11th June 2001

a stable process in simple terms is a process of which all the causes of variations are known and are acted upon and the process is then governed by common causes of variations, where the output of the process is fairly predictable. management decision requires to further increase the capability of the process.

+~+~
Re: Stable process: what is it? Posted By: Ken K. Posted On: Tuesday, 12th June 2001

I wouldn’t go so far as to say that “all the causes of variations are known”. That is a pretty extreme statement.

I would tend to say a stable process is one that is comprised of mostly common cause variation, as opposed to special cause variation. As you hinted, that common cause variation will be comprised of a whole bunch of sources of variation; some will be knowable and some won’t.

The whole idea of process improvement is to understand many of those sources of variation and try to remove/control them.

0
#67037

Kim Niles
Participant

Wow, Lots of really super posts…. THANKS!!

So where are we now? The problem, as I now see it with your help, is that we are looking at a third-order problem with first- and second-order thinking. There are obviously many different ways to convince many different people of what process stability is, depending upon many different types of situations.

We either need a universally acceptable solution supported by respectable organizations or some other more obvious third order solution (via breakthrough) that fits all types of situations. For example, ASQ and ASA could state that they support AIAG’s definition of process stability with one of “within 9 sigma regardless of distribution shape and or process mean shifting”. One third-order example could be some very descriptive formula that allows anyone who reads it to fully understand the process being described in comparison to any other process. For example, the phase xx process being measured with xx capability is xx% stable over xx time given xx number and xx types of distribution assumptions.

Any other ideas?

0
#66996

Kim Niles
Participant

Dear Ken: You were right, I found the excellent article you suggested at http://www.asq.org/pub/jqt/past/vol32_issue4/qtec-341.pdf and list a few highlights as follows:
1- One purpose of SPC is to distinguish between common cause and assignable cause variation in order to prevent overreaction and underreaction to the process.
2- The distinction between the two types of variation is context dependent such that they may switch places from time to time.
3- The distinction can also change with sampling as one only wants to react if it is practical and economic to do so.
4- A process is “in statistical control” if the probability distribution is constant over time.
5- Deming (1986) advocates more than meeting specs…reduce variation; Taguchi (1981) advocates variation reduction until it isn’t economically advantageous.
7- It is very important to distinguish control chart phases; phase 1 ~ exploratory data analysis, phase 2 = in control.
8- To view control charting as equivalent to hypothesis testing is an oversimplification.
9- Control limits of the X and R charts assume normality, yet non-normality appears to have little effect (Burr 1967).
10- The probability of SPC signals varies depending upon distribution shape, the degree of autocorrelation in the data, and the number of samples.
11- Wheeler (1995) states that an autocorrelation coefficient >0.6 is significant; otherwise not.
12- Deming… the shift in the mean of a normal distribution may itself be shifting normally, such that no process measures can be perfect.
13- Discussion on Bhote pre-control: a process shown to be in control is good, but it might not be capable relative to specs that aren’t shown.
14- ASQ references Bhote for the CQE, yet Bhote refers to control charting as “a waste of time” and DOE as “of low statistical validity”. [good points, but I support ASQ on this one]
15- New process adjustment strategies include regression-based, multivariate, variance components, variable sampling, change-point techniques, etc.
16- The 7-or-more-consecutive-points control chart rule is ineffective and should be discontinued.
17 – The scope of SPC should be broadened to include more understanding
18- One communication problem is that researchers put narrow contributions into the context of an overall SPC strategy.
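As a side note on point 9, the X-bar and R chart limits the article discusses are simple to compute. Here is a minimal Python sketch (my own illustration, not from the article), assuming subgroups of size 5 and the standard tabled constants A2 = 0.577, D3 = 0, D4 = 2.114 for that subgroup size:

```python
A2, D3, D4 = 0.577, 0.0, 2.114  # standard tabled constants for subgroup size n = 5

def xbar_r_limits(subgroups):
    """Shewhart X-bar and R chart control limits for subgroups of size 5."""
    xbars = [sum(sg) / len(sg) for sg in subgroups]       # subgroup means
    ranges = [max(sg) - min(sg) for sg in subgroups]      # subgroup ranges
    grand_mean = sum(xbars) / len(xbars)                  # X-double-bar
    rbar = sum(ranges) / len(ranges)                      # average range
    xbar_limits = (grand_mean - A2 * rbar, grand_mean + A2 * rbar)
    r_limits = (D3 * rbar, D4 * rbar)
    return xbar_limits, r_limits
```

With real data you would plot the subgroup means and ranges against these limits; different subgroup sizes need different constants from the standard tables.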

0
#66970

Kim Niles
Participant

Montgomery states in his book (Montgomery, Douglas C. “Introduction to Statistical Quality Control”. Wiley & Sons, Inc., New York, 2001, 4th ed., pg. 372) that in 1991 the Automotive Industry Action Group (AIAG) was formed, with one of its objectives being to standardize industry reporting requirements. He says that they recommend Cpk when the process is in control and Ppk when it isn’t. Montgomery goes on to get really personal and emotional about this, which is unique to this page among this book and the other books of his I have. He thinks Ppk is baloney, as he states: “Ppk is actually more than a step backwards. They are a waste of engineering and management effort – they tell you nothing”.

While Montgomery gets frustrated over the use of Ppk, he does a poor job of explaining what a stable process is. I respect his works just the same, as he alone has even attempted to explain the difference between a stable process and a non-stable one.
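To make the Cpk/Ppk distinction concrete: Cpk is conventionally computed from the “within-subgroup” sigma (e.g. Rbar/d2), while Ppk uses the overall sample standard deviation, so a process whose subgroup means drift will show a Ppk well below its Cpk. A minimal Python sketch of that convention (my own illustration, assuming equal-size subgroups and the standard tabled d2 constants):

```python
import statistics

D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}  # tabled d2 constants by subgroup size

def cpk_ppk(subgroups, lsl, usl):
    """Return (Cpk, Ppk) for equal-size subgroups and two-sided specs."""
    n = len(subgroups[0])
    data = [x for sg in subgroups for x in sg]
    mean = statistics.fmean(data)
    # Cpk: "within" sigma estimated from the average subgroup range (Rbar/d2)
    rbar = statistics.fmean(max(sg) - min(sg) for sg in subgroups)
    sigma_within = rbar / D2[n]
    # Ppk: "overall" sigma from the plain sample standard deviation
    sigma_overall = statistics.stdev(data)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma_within)
    ppk = min(usl - mean, mean - lsl) / (3 * sigma_overall)
    return cpk, ppk
```

For instance, two subgroups with shifted means like `[[9, 10, 11], [12, 13, 14]]` give a Cpk noticeably higher than the Ppk, because the between-subgroup drift inflates the overall sigma but not the within-subgroup sigma.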

I plan to continue this debate under a different heading (see: What is a stable process?).

I hope this helps.
Sincerely,
KN – http://www.znet.com/~sdsampe/kimn.htm

0
#66768

Kim Niles
Participant

I’m really surprised that you haven’t had multiple posts on this by now, as it is a weak and controversial topic of Six Sigma. Perhaps the fact that it is a weak area is the reason…no one knows.

Tom Pyzdek advocates net present value (NPV), but I doubt it’s used much.

I believe the goal is to weed out the hidden costs so that improvement efforts are realistic. I like to get down and dirty by calculating out every cost of a project from material costs, material overhead, labor costs, labor overhead, machine costs, machine overhead, rework costs, scrap costs, SG&A costs, costs of alternate plans, etc. because sometimes things pop up that are hard to see.
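Since net present value came up above, here is a minimal sketch (my own illustration) of how a project’s up-front costs and later savings could be rolled into a single NPV figure; the 10% discount rate in the usage example is an arbitrary assumption:

```python
def npv(rate, cashflows):
    """Net present value: discount each period's cash flow back to today.

    cashflows[0] is the time-zero flow (typically the up-front cost,
    negative); later entries are the savings realized in each period.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
```

For example, a project costing $1000 today that returns $600 in each of the next two years has `npv(0.10, [-1000, 600, 600])` of about $41 at a 10% discount rate, so it would (barely) clear that hurdle rate.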

What is cost avoidance?

I hope you get more posts because this is really an important topic to discuss.

0
#66769

Kim Niles
Participant

Interesting subject, but I have a couple of questions:
1- How are availability, efficiency, and quality determined? By how often the machine is ready for use? How much throughput the machine had? The machine yield?
2- Patterson et al., 1997 was cited without a title. How can I learn more?
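On question 1, the usual OEE convention (hedged — the post being replied to may define these differently) is availability × performance × quality, where availability is run time over planned time, performance is ideal output time versus actual run time, and quality is good-part yield. A minimal sketch under those assumptions:

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness under the common A x P x Q convention."""
    availability = run_time / planned_time                   # fraction of planned time actually running
    performance = (ideal_cycle_time * total_count) / run_time  # actual rate vs. ideal rate
    quality = good_count / total_count                       # first-pass yield
    return availability * performance * quality
```

For example, a machine planned for 480 minutes that ran 400, producing 600 units (ideal cycle time 0.5 min/unit) of which 570 were good, gives an OEE of about 0.59.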

KN

0
#66694

Kim Niles
Participant

Here are a few Six Sigma urls I put together:
+~+~+
Six Sigma Research Information

http://www.servqual.com/kano.html Information on the Kano model

http://www.dosixsigma.com/sixsigmadefinition.htm ; Six Sigma Info

http://www.asq.org/products/sigma/ ; ASQ Six Sigma Background

http://www.ge.com/annual99/letter/letter_three.html ; GE 6Sigma Letter

http://www.gess.ge.com/sigma.asp ; GE 6Sigma Quality Page

http://www.traininguniversity.com/magazine/may_june00/lookat.html ; Motorola Univ 6 Sigma