iSixSigma

SixSigma criticism

  • #45029

    All,
    I found the following quotation on the internet, from the director of “The Toyota System for Service Organisations”.

    Now many of us agree there is too much micro-management, much of SS seems inflexible, many people spend too long at their desks writing reports, and there are too many inappropriate targets; but there is nothing in TPS against ‘Define’ or the pursuit of process excellence. On the contrary, in some cases process excellence (adequate process capability) is the only way to reduce Muda.
    Andy

    0
    #145771

    Marlon Brando
    Participant

    Agree.
    To solve this problem we have to apply “Gemba Kaizen”.
    No need to write long reports – one page containing a strict time schedule (done / not done / responsible person) is enough.
    Involve management (maybe top management) to avoid such a status. Top management support is a must in this case.
    Even meetings should be short – 1-2 hours is fair enough.
    The best solution, in my opinion, is to apply Lean SS, or to include TPS in the DMAIC concept and evaluate the results.
    SS alone is not enough, so I suggest the following formula:
    Lean-SS + Kaizen Management + TQM + Change Management = SUCCESS
      

    0
    #145774

    Six Sigma Shooter
    Member

    Lean-SS + Kaizen Management + TQM + Change Management = SUCCESS
    Success = A theory of management with a stated Aim + Strategic planning / SWOT analysis + knowledge management + change management + the “tools” and “methods” (Lean / Six Sigma / TQM / SPC / whatever) + balanced scorecards / dashboards + . . . ?

    0
    #145776

    Marlon Brando
    Participant

    Agree. Thank you.
    Same formula with more details and elaboration. We shouldn’t fight; the concept is the same. Cheers

    0
    #145778

    Six Sigma Shooter
    Member

    Not fighting, Marlon.  On the contrary, I agree with what you and Andy have posted.  I would challenge that they are “the same.”  I felt there were some significant factors left out of your model and wanted to make my additions, spur on the conversation, and see what others have to say.
    Best regards
    “Shooter”

    0
    #145788

    Airedale
    Participant

    I find this discussion interesting, as it appears to confirm what I am trying to push at my current employer: that there is no one “correct” approach to process improvement (no silver bullet), but rather that each approach is merely another tool in a large tool box. Thoughts?
    Airedale

    0
    #145790

    EdG
    Participant

    Hey all, How about this:  Once the abnormal is distinguished from the normal, use whatever means necessary to make it right.  Right can only occur once the root cause is found.
    Notice, I didn’t use any tools, theory, methodology, or titles in this.  Why?  Because I don’t care which one was used as long as we have “made it right.”
    You like???

    0
    #145795

    Airedale
    Participant

    EdG,
    Ok, I will buy into that to a certain amount, as it avoids the process improvement PhanBoi saying my approach is better than yours. However, you need to know the tools and how to apply them. You can’t use a screwdriver to take off a hex head nut, and while you can use lock jaws on a round screw, you may wreck the screw in doing so.
    Airedale

    0
    #145802

    Brit
    Participant

    Seems like this post is in agreement – I’d like to add one thought.
    In organizations where a process has not been introduced, it is good to have some type of structure.  I like the DMAIC structure, and then apply the tools that are needed depending on the problem (lean, SS, TOC, etc.).  This provides a framework, yet leaves flexibility to use the tools that will provide the most good.
    In my org, we either have a long project for a complicated issue (no real solution available) or do a Kaizen event for other issues that can be more readily solved. In any case, we use the same framework of DMAIC.  The org used to use PDCA, but found that controls were rarely in place to prevent reverting to the old ways.
    my 2 cents

    0
    #145807

    Marlon Brando
    Participant

    I’m glad that you agree. Based on your real field experience, can you give a brief description of the Kaizen Blitz? Was it a success, considering the short period (3-5 days)? Have you really achieved some tangible results? Thanks and regards

    0
    #145808

    Orang_Utan
    Participant

    Too many cooks spoil the soup. Too many tools spoil the success. A company I know is running three types of CI, i.e. six sigma, TPM and task force (kaizen), under different departments by the same group of people simultaneously. The end result is that their resources are spread too thin and the CIs are out of focus.

    0
    #145810

    Marlon Brando
    Participant

    Excellent. I wish you could elaborate more, just to explain the cause of such failure. I have heard many “pretenders” from big companies telling lies about running several mixed CI methodologies at the same time. I couldn’t believe them, and I proved to be right. Thank you

    0
    #145811

    Neeraj Katare
    Participant

    I do agree with this view. My experience in leading Six Sigma as well as Lean in a couple of organizations tells me that if we try to resolve all our process issues with Six Sigma, we may not always get the best result; the methodology (Kaizen, JIT, DMAIC or DFSS) must be selected based on project scope, type and timelines. Urgent issues can be resolved by simple root cause analysis. Applying Six Sigma to critical, urgent issues can be misleading and wasteful, since the overall DMAIC approach involves many time-consuming steps, and some of the deliverables from an SS project also focus on long-term return rather than short-term gains.

    0
    #145812

    Orang_Utan
    Participant

    I see management being too greedy, chasing after many unplanned goals without considering the TOC on their people.

    0
    #145817

    Brit
    Participant

    Good point. That is one reason why I chose to implement under one structure with many tools available.  Also have only one group responsible for the improvements so that project responsibilities aren’t spread out.

    0
    #145818

    Airedale
    Participant

    Brit,
    “That is one reason why I chose to implement under one structure with many tools available.” – you are right on the money. “Also have only one group responsible for the improvements so that project responsibilities aren’t spread out” – and again… I am from IT, and I can tell you that no company can survive by saying “we will use only Microsoft software on Dell computers”. There are universal truths… there is no silver bullet, and there is no replacement for knowledge, understanding and experience.

    0
    #145820

    Six Sigma Shooter
    Member

    Amen!  The never ending journey of learn, adapt, apply, improve . . .
    Regards,
    “Shooter”

    0
    #145823

    Six Sigma Shooter
    Member

    Marlon,
    Unfortunately, it is true.  Some very large and well known companies separate their CI efforts under totally different silos.  It isn’t pretty.  The infighting over who gets credit for what, so that they can justify their existence, is ridiculous and nauseating.  The cause of it?  Well, I guess one could cite “stupidity” on the part of management for doing such things, but, being in a kinder / gentler mood today, I’ll just chalk it up to them not knowing any better.  Who knows why they do it – I just wish they would stop.
    Best wishes,
    “Shooter”

    0
    #145832

    EdG
    Participant

    It goes without saying that you must understand how to use a tool to apply it appropriately.  I am trying to stress the fact that “my screwdriver is better than your pliers for…” is a waste of time.  If I need a screwdriver, so be it – then I’ll use one.
    But the one-upmanship is a waste of time.  Who cares which is the best tool in the toolbox? They all serve their purpose at some point in time.
    Comprendere???

    0
    #145840

    Airedale
    Participant

    EdG, understood. I guess I am reflecting the environment I am in and have assumed (wrongly?) that most other environments are similar to it. Here they chase the silver bullet, and when it fails it is off to the next silver bullet (this is not the first place I have worked that does this). After a few years of everything six sigma, it is now everything lean. Lean is attractive because of its simplicity of approach and because you do not have to spend weeks to get a basic understanding of it. It is not one-upmanship I face, but rather that, because the tool was implemented incorrectly, management has assumed it is an invalid tool and should no longer be considered.
    Airedale

    0
    #145843

    jtomac01
    Participant

    My experience in leading and executing kaizens (sorry, we didn’t call them blitzes – probably another marketing ploy for some book) goes something like this:
    We used them tactically to implement TPS and improve an area of focus. Prep work began a couple of weeks before entering the area, where data would be collected on standard work, WIP, C/T, layouts, team identification, etc.
    Day 1 – Half a day of training on TPS and Lean tools tailored for the area we were kaizening; for example, if an area was having trouble meeting TT due to set-ups it would focus on SMED, and if it was the first time for anything we would do full TPS training.
    Half a day walking the area, determining TT and beginning to measure C/T.
    Day 2 – All about the current state: understand and measure everything, determine the bottleneck and begin light trystorming.
    Day 3 – All about trystorming (the whole PDCA cycle several times a day).
    Day 4 – Trystorming; begin to implement controls (visual and simple), measure improvement, complete the executive report-out.
    Day 5 – Complete the kaizen newspaper (action item list), report out.
    Post-event, weeks 1-3 – Continuous monitoring by the Supervisor/Manager and Director of the area, along with the CI agent, as to whether the controls stick; if not, answer why.
    Yes, a lot of tangible results.  Very hard to argue when you have moved an entire shop of lathes/mills around and you can visually see the throughput and quality.

    0
    #145846

    EdG
    Participant

    Airedale,  I think you would be surprised how similar our circumstances are.  Maybe that is why I have changed my opinion to what it is.  When I am working with a team, I purposely will not tell them that the tool is a six sigma, lean, TOC, TQM, or (you name it) tool.  I’ll simply introduce the tool as “I think this will help us understand the situation better and here is how we use it.” and go from there.  Good luck…

    0
    #145848

    mand
    Member

    True, but the most “ridiculous and nauseating” part is for the professional CI groups watching BB idiots with 4 weeks of training running around pretending to know it all.

    0
    #145849

    Profee CI
    Participant

    Professional CI? What qualifies a person to do that?

    0
    #145850

    jtomac01
    Participant

    Interesting, in the mid 90’s the company I was working for at the time was doing something similar.  Six Sigma & TPS at the same time, all part of a Continuous Improvement team.  It did turn into a silo situation where the philosophers studied Dr. Mikel H. and the rest of us went about improving the process using whatever we needed to make it work.

    0
    #145851

    mand
    Member

    I find it hard to imagine how thinkers would be bothered studying Mikel.

    0
    #145852

    jtomac01
    Participant

    I guess I should clarify.  They spent the majority of their time discussing and arguing the Six Sigma philosophy (thus whatever Dr. Harry wrote was the latest thing) rather than doing projects and leading improvement efforts.  To their credit, they won over the top leadership through their slideshows, discussions, etc.  Of course, after a while they began to take credit for what was happening on the TPS side, and that is when the rubber met the road.

    0
    #145856

    mand
    Member

    I think I see what you mean.  Mikel’s musings are rather cryptic … for example the switch in his “explanation” of his 3.4 DPMO from the 1.8 factor from his stacks of disks (how crazy was that one) to his subsequent Chi-square approach … both equally nonsensical, but it takes some time to understand just how foolish.

    0
    #145860

    Jtomac01,
    I worked in Motorola waferfabs in Austin from 1984 until 1990. We broke every yield record in the book – we even had the ex-Director of Hitachi’s R&D (25 years) come and join us! Guess what – we never saw hide nor hair of Dr. Harry. Even when I was Chairman of the Six Sigma Steering Committee in Austin, I never saw him there once.
    In fact, I only ever saw him twice. Once when he taught a class on DFM in Oakhill using some kid’s toy, and then later when we gave a presentation to the Quality Committee in Phoenix in 1988 – after we were told to put the 6s logo on all our PowerPoint slides. (The contents of our slides included some principles of TPS.)
    I should also mention I toured several waferfabs in Phoenix, and every one of them practiced ‘one-by-one’ formation in one form or another – either wafers, batches, or steps!
    The sum total of Dr. Harry’s contribution was zilch – which only goes to show: “those that do the most are valued the least; and those that do the least are valued the most.”
    Andy

    0
    #145862

    Barry U
    Participant

    As an occasional lurker here, I thought that you were a supporter of Dr Harry.  Do you feel he has made any real contribution since 1988 ?

    0
    #145863

    Barry,
    I only wanted to make the point that Dr. Harry contributed very little towards Austin’s successes or, according to others, Phoenix’s.
    However, there is no doubt he made substantial contributions in other companies, notably G.E., who did share Motorola’s level of management sophistication, Juran training, etc.
    Another issue is how much Dr. Harry’s version of Six Sigma would be able to help other sophisticated companies, such as Rolls Royce, who already have high sophistication in project management, systems analysis, statistics, etc.
    In my view, they would do better to follow Motorola’s original path and use some elements of TPS.
    Recently, I met someone else at Minitab’s open day who also had some experience of component ‘matching’ – unheard of in Dr. Harry’s version of Six Sigma, but quite common in TPS – which could make substantial differences to performance. (Matching is an acknowledgement of multivariate relationships between components and process steps.)
    In summary, I’ve nothing against Dr. Harry personally, and I don’t deny his success – only some of the marketing claims and those of his alter ego Reigle.
    Cheers,
    Andy

    0
    #145865

    Barry,
    I meant to write … did NOT share:
    “However, there is no doubt he made substantial contributions in other companies, notably G.E., who did share Motorola’s level of management sophistication, Juran training, etc.”
    In other words, G.E. were messed up ‘big time’!
    Andy

    0
    #145866

    Marlon Brando
    Participant

    Thank you. I appreciate your reply. I am always hearing about “KB”, but this is the first time I have seen a real outcome with brief, tangible results. This is really what I call an “AV Report”. I hope that all the others in this forum will learn from you. Cheers

    0
    #145867

    Marlon Brando
    Participant

    Great  Conclusion

    0
    #145868

    Marlon Brando
    Participant

    Out of curiosity, I wonder why everybody here is against Dr Harry? It is becoming a “gossip” forum. All I wish is for Dr. Harry to read all of this and to have the opportunity to defend himself.

    0
    #145869

    He’s already done that under a number of different guises.
    Just my opinion,
    Andy

    0
    #145870

    Barry U
    Participant

    I’ve read that Motorola was TQM before you “put 6s on your slides”.  I hadn’t heard they had elements of TPS. How would you describe their overall approach ?  Did Bill Smith play any significant role or were the changes all due to Dr Harry ?

    0
    #145871

    Marlon Brando
    Participant

    Are you sure? Anyhow, such postings can act as free promotion for him. They could even help to market his books. Thanks

    0
    #145872

    Marlon Brando
    Participant

    According to my knowledge, Bill Smith is considered “the Father of Six Sigma”???

    0
    #145873

    As I mentioned previously, most semiconductor waferfabs had a similar approach back then. I’ve been out of it for a while now, so I can’t draw a comparison with what happens now.
    If you accept that the two pillars of TPS are ‘one-by-one’ confirmation and ‘Just-in-time,’ then you’ll understand why semiconductor engineers take a similar approach – particularly as insurance against yield crashes. I’ve lived through several of these, and without confirmation of correct processing, and equipment set identification, they can be extremely difficult to solve.
    To support my argument about the importance of 100% inspection/testing, which I regard as value-enabling, I mentioned Yosinoba Kosa of Hitachi, who served in Motorola’s MOS 8 for several years – so it’s not surprising we would use similar methods to MOS 7 (Japan) and TQC (Hitachi), which is based on Tai-ichi Ono’s discovery at Toyota.
    Many people find this surprising, especially as they’ve been told ‘inspection’ is wasteful, but in many of the cases I’m referring to the measurement is performed by automatic test equipment, such as a Perkin Elmer wafer inspection station, etc.
    One of the potential problems with Lean is a lack of recognition of ‘value-enabling’ steps.
    Andy

    0
    #145874

    Marlon Brando
    Participant

    Great description, but I still wish to know more details about obstacles, difficulties and quick wins (brief and bullet-wise, if you could). Thanks and regards.

    0
    #145875

    Marlon Brando
    Participant

    Andy
    With your continuous, great AV contributions to this forum, do you have any published book so far? If yes, please let us know the publisher, as I’m interested in having a copy. Thanks

    0
    #145876

    Marlon Brando
    Participant

    Agree, as I’m doing the same (but I don’t like it). On the other hand, why is the Dr. not denying the name of the tool used? I wonder. There is always talk about “buzz” words. In my opinion, classifying those tools allows professionals to take pride in their work, and it may help to discover “pretenders” or “bad consultants”. Just my opinion

    0
    #145894

    Marlon,
    It’s all been done before. I see no reason to write a book about a Japanese invention, change the names of concepts, etc.
    As someone else said in the forum – why not read the original papers, articles, etc.?
    My advice would be to read the article I posted very carefully and put its concepts into practice, along with the Japanese concept of ‘process excellence.’
    Cheers,
    Andy

    0
    #145900

    Barry U
    Participant

    Thanks Andy.  But what happened when you started “putting 6s on your slides” ?  Your comment sounds as though you are saying it was business as usual but under a different banner.  Did Bill Smith play any significant role or were the changes mainly Dr Harry’s marketing ?

    0
    #145911

    EdG
    Participant

    I don’t deny the name, I just omit the source.  For example, “Here is a QFD and this is what it does for us, so let’s go through and generate one to help us…”  Why muddy the waters with “well, this is a Six Sigma tool” or “this is a TQM tool”, even though we are working on a Lean project in preparation for a series of kaizen events?
    Do I need to know the inventor of a tool, or its current category (today’s marketing scheme), to use it effectively?  No.  Does the current category we find it in alter its usability or effectiveness?  No.  Hence, I don’t care about that.  Just tell me (and them) what it helps me with, how we correctly use it, and how to effectively use the darn thing.  Then let us have at it.
    Case in point: in the early 1990s, within the Navy’s TQL/TQM program, there was a nice tool that helped us understand a process.  Today folks stress it as a very important Lean tool.  In actuality the tool didn’t change, and it is still a good tool (when used properly), so do I care that ~15 years ago it was a TQM tool and today it is a Lean tool?  Not really.  The tool: a Value Stream Map.
    I hope that this makes sense…

    0
    #145917

    Barry,
    Yes, that’s right, it was business as usual .. at least until 1990.
    In 1989, the training department at Ed Bluestein still taught Advanced Diagnostics and Planned Experimentation, which were both based on Shainin’s collection of tools. I’m not claiming these were the only tools we used, though, because by that time both MOS 8 and MOS 11 used Taguchi Methods, multivariate methods, and anything that helped reduce variation and improve yield.
    The only interaction we had with Bill Smith in Austin was when he visited the new photolithography area in MOS 3 in about 1985, which received wide attention throughout Moto. I later had a discussion with someone who worked for Bill about setting design tolerances at +/- 6 sigma, instead of +/- 3 sigma, because he wanted to ‘confirm’ it made sense. Anyone who has worked for a Japanese company would recognise this trait of ‘confirm, confirm, ad infinitum.’
    I’m not sure of the source for the discussion – my understanding is Bob Galvin credited Bill Smith – but Bob Galvin was fully aware of the Japanese process control indices Cp and Cpk, because MOS 3 was the first facility to achieve this level of performance in any process. The fact that it was photolithography really made people sit up and take notice. This work was published in Semiconductor International in May 1987. The reason I know is because my colleague’s father was a close friend of Bob, and he even wrote a letter to Mark to congratulate him.
    Regards,
    Andy

    0
    #145959

    Marlon Brando
    Participant

    Thank you. I have read the article, but I still believe that a man with your unique depth of knowledge should be able to prepare a book. Good luck

    0
    #145966

    jtomac01
    Participant

    Sorry, not much time today.
    Obstacles – Change happens very quickly, so pursue understanding first, then gain confidence, then implement.  Pre-work for operational (shop floor) events is very easy and usually doesn’t take much effort.  Support staff can also be a huge obstacle, so always get buy-in on the what-ifs a week or so before (e.g. when thinking you may need to move a lathe with really tight tolerances, make sure you have the tools and on-site knowledge to do it, or you can be down for a while).  For transactional events there is a lot of data crunching, which can take up to 4 weeks or so.  Data availability can also hold up a transactional or engineering development event fairly quickly.
    Quick wins – You have to take the approach that you will make mistakes; the important thing is to try something first (either simulate or actually effect the change).  Generally, once a floor or team sees this, they respond accordingly.
    More later.

    0
    #145967

    Torrance
    Participant

    I find the DMAIC methodology can be used for issues that need an immediate fix, but also for projects lasting 6 months. Admittedly though, the level of necessary detail can be different.
    Take example 1:
    Issue found during production of high volume parts.
    Define – what is the issue, where is the symptom found…., Measure – how many are affected, Analyse – what are the potential causes and how would we fix these, Improve – implement the fix(es), Control.
    Not forgetting the obvious containment step that is so important in production
    These steps can be taken quickly – as is required during high volume production – and it doesn’t have to be labelled as an official DMAIC project, etc., but the steps are important to make sure we don’t jump to conclusions on the fix, and follow issues using a consistent approach.
    Example 2:
    To improve the productivity of a specific line – requires much more detailed DMAIC approach.
    Steps are the same for both examples – but the required detail and resource required is very different.
    My opinion is that SS concepts / tools can be used for quick wins as well as longer term projects – in fact they must to maintain our credibility.
     

    0
    #145968

    Tony Bo
    Member

    I agree somewhat with the director’s comments; however, that particular situation – where you have teams “reacting” to problems from the managers who are screaming the loudest (about problems that may not necessarily be the right problems to fix at the moment) – is an indication of a poorly implemented SS program, as well as a lack of project prioritization / selection criteria in that business.

    0
    #145970

    Barry U
    Participant

    Very interesting.  So if Bill Smith was responsible for setting design tolerances at +/-6, where and when did the +/-1.5 sigma first come in … Dr Harry I suppose, rather than Bill Smith ?

    0
    #145974

    pappas
    Participant

    All good points.
    However, let’s not forget that continuous improvement does NOT always mean MANAGEMENT chartering projects.  It also involves people on the manufacturing floor being empowered to problem-solve and improve their processes on their own.  Some may argue that this isn’t six sigma.  Well, it’s how we define it.  DMAIC is indeed a great methodology for framing a problem and solving it.  It may take months.  It may take hours.  My opinion is that we take the best things from “six sigma” – the tools, the DMAIC methodology, etc. – and use what we need.  I use DMAIC in the blitzes I facilitate (e.g. standard work, etc.).  I find it helpful, so I use it.  I also find it helpful when solving more complex engineering problems, so I use it there as well.  I hope my people don’t wait until projects are chartered for them to use DMAIC.  It’s a thought process that applies in so many ways.  Just my two cents’ worth.

    0
    #145975

    Marlon Brando
    Participant

    Just great. I’m glad to read your brief description, and am waiting for more details. Having persons like you in this forum really adds value.
    Thanks.

    0
    #145981

    Barry,
    I’m not sure where it came from, and I don’t know if Bill Smith supported it .. perhaps Praveen Gupta knows …
    Personally, I’ve never bothered with it – neither did any of my colleagues at Motorola, or any Japanese engineer I’ve ever met.
    What’s interesting, though, is that if you perform a number of simulations in Excel and compare the estimate of sigma from a single subgroup of n = 30 with that of multiple subgroups of n = 3, g = 10, you’ll find a difference, and the difference is about 1.7 sigma.
    This led me to surmise that Dr. Harry became confused with the Motorola practice of assessing equipment for purchase based on an n = 30 ‘head-to-head.’ This would be quite different from a full ‘characterisation’ and determination of process capability, which would be based on at least 30 subgroups.
    If you agree with my conclusions, there is no need to apply a shift :-)
    Cheers,
    Andy
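    For anyone who wants to try the comparison Andy describes, a minimal simulation sketch follows (Python rather than Excel). The thread does not say which estimator was used for the subgrouped case, so the Rbar/d2 range estimate below is an assumption, and the numbers it prints should not be read as confirming the ~1.7 figure.

import numpy as np

rng = np.random.default_rng(1)
d2 = 1.693                        # range bias constant for subgroups of n = 3
single, pooled = [], []
for _ in range(5000):
    x = rng.normal(0.0, 1.0, 30)                      # 30 observations from N(0,1)
    single.append(x.std(ddof=1))                      # sigma estimate from one subgroup of n = 30
    sub = x.reshape(10, 3)                            # the same data viewed as g = 10 subgroups of n = 3
    rbar = (sub.max(axis=1) - sub.min(axis=1)).mean() # average subgroup range
    pooled.append(rbar / d2)                          # Rbar/d2 estimate of sigma

print("single n=30 sample SD : mean %.3f, max %.3f" % (np.mean(single), np.max(single)))
print("n=3, g=10 via Rbar/d2 : mean %.3f, max %.3f" % (np.mean(pooled), np.max(pooled)))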

    0
    #145995

    Barry U
    Participant

    Andy, 
    I had a play with Excel and found I could get almost any average difference you want by increasing and decreasing the number of points from the 30 you suggest. This is what you’d expect … you need more points for a better calculation of sigma.
    Forgive me if I seem a bit slow at times, but you said in your previous post: “I later had a discussion with someone who worked for Bill about setting design tolerances at +/- 6 sigma, instead of +/- 3 sigma”.  Why had you set design tolerances at the same level as control limits prior to Bill Smith?
    This seems a bit strange?

    0
    #146011

    Barry,
    Yes, I agree. The difference observed after a large number of simulations seems only to depend on the subgroup size – and the larger the subgroup size, the smaller the difference. The reason for focussing on n = 30 is ‘historical,’ and because it is the ‘Reigle benchmark.’
    If you accept that n = 30 provides 95% confidence, there appears to be an anomaly in choosing an estimate of sigma based on this small sample size to calculate a set of control limits, which, as I’m sure you know, ought to represent 99.73% confidence intervals.
    The issue of tolerances came up because everyone recognised we had to reduce variation after we studied all our processes on Multi-vari charts. It seemed obvious the only way to achieve the economic limit of improvement, Cp = 2, was by reducing variation and increasing the width of tolerances.
    (Earlier, we had justified increasing tolerances simply on the basis of the argument … if you can’t measure a tolerance reliably, you can increase the tolerance to what can be measured reliably without impacting the probability of a correct decision.)
    At that time, design engineers were put under a lot of pressure to design circuits with a high first-time yield, but at the same time with the smallest number of transistors, and the implication of this appeared to be tight tolerances.
    By relaxing the demand for small circuit sizes, and therefore a smaller number of candidates on each wafer, the design engineers could relax tolerance requirements, and this is what they did.
    (Sometimes, more of less is better than less of more.)
    Cheers,
    Andy
     

    0
    #146016

    Barry U
    Participant

    Andy,
    1. The difference depends on both the subgroup size and the number of samples. If I used Excel to look at a practical typical situation of 5 samples an hour for a week, there’s no real difference at all.
    If I only looked at the first 6 hour’s data (30 points), there would obviously be errors in any estimate of sigma. I can’t understand why you suggest only 30 points should be used.
    What are the “historical reasons” and what is a “Reigle benchmark” ?
    2. I’m just an enthusiastic amateur but from all I’ve read, Shewhart Chart control limits do not relate to confidence intervals.  Shewhart stressed this quite strongly.
    3. Picking a Cp=2 sounds fine but you said in your previous post that you had been using  Cp=1 at Motorola (design tolerance of +/-3 sigma).  Why was this ?
    4. I don’t quite understand what you mean by fewer transistors on a wafer required tighter tolerances.  I would have thought it would be the other way around ?
     

    0
    #146018

    Barry,
    What I meant was that both n = 30 and (n = 3, g = 10) have the same number of samples. If you simulate both cases using N(0,1) you’ll find differences, and as you increase the number of samples the differences become smaller – as you say.
    The historical use of n = 30 is due to Dorian Shainin. I believe he was correct when he stated you can estimate sigma to within 95% confidence using n = 30. (In other words, the relationship between the estimate of sigma and ‘n’ becomes asymptotic around n = 30.)
    Regarding Shewhart, I have heard about his economic sample size in the past, and more recently from someone in the forum. But I still think the control limits correspond to confidence intervals if entropy is taken into account. Why else would there be a difference – over many, many simulations – between the estimates of sigma for a single subgroup and multiple subgroups, for the same sample size?
    Setting tolerance limits to +/- 3 sigma was pretty standard throughout most industries during the 70’s and 80’s. I don’t know why. I first heard the argument that broke its back in the 80’s, when I worked for General Instruments – the rolled yield argument – which I obviously related when I joined Moto in 1984.
    Semiconductor circuits use standard layout rules and processes. In my time, we had 1 micron design rules and 0.8 micron design rules. These days it’s around 0.25 micron!!!
    Anyway, a typical microprocessor circuit would have about 1 million ‘standard layout,’ 1 micron transistors, and the amount of silicon ‘real estate’ needed would be proportional to the number of transistors. This means you can only put a finite number of circuits, or die, on a silicon wafer.
    For any given product wafer yield, there are two ways to increase the yield – put more die on a wafer keeping the random and systematic defect densities the same, or increase the number of circuits by using fewer transistors. In other words, fewer transistors make smaller circuits! I can even remember a product manager gloating once about how small his circuits were compared to Hitachi’s. (Ignorance always gloats!)
    What they didn’t realize is that the use of more transistors does not necessarily increase the opportunities for defects; it can reduce them by, for example, making a sub-circuit less sensitive to electrical deviation.
    Cheers,
    Andy
     

    0
    #146027

    Marlon Brando
    Participant

    Cp=1 is too low??

    0
    #146064

    Barry U
    Participant

    1)  N=30 is new to me, so I’ve done a few hours research. As my daddy used to say to me, don’t do anything without understanding what you are doing.

    Dorian Shainin used n=30 in a DOE example. He makes no recommendation about using n=30.
    Several hundred Excel runs with n=30, sigma=1.0, gave sample SD’s ranging from .64 to 1.41. A lot more samples than 30 are needed to approach the true value of 1.0. Doubling the number of points cuts the difference by about a third. 120 points halves it. 30 is certainly nowhere near an asymptote.
    I found references claiming n=30 relates to this example of the Central Limit Theorem, for n=2,4,8,16,32 http://www.statisticalengineering.com/central_limit_theorem.htm  As expected, it shows the SD decreasing as the subgroup size increases. It makes no recommendation about using n=30.
    A subgroup of 30 is obviously way too big to be practical.
    A control chart with 6 groups of 5 points is less useful than a chart of some duration.
    Perhaps someone can help us here on the significance of n=30 ?
    2)  You are correct. Shewhart said 3.0 control limits “seem to be an acceptable economic value” . He mentions the Central Limit Theorem but did not use it in his discussions of Shewhart Charts. He said that it did not matter what sort of distribution a process had “We are not concerned with the functional form”.
    In my researching I stumbled across an excellent description of this: http://www.stat.ohio-state.edu/~jas/stat600601/notebook/chapter9.pdf
    In relation to your comment on entropy, I also stumbled on a paper that compares the probability approach that you suggest with Shewhart’s pragmatic approach. This paper is not as readable as the one above, but his conclusion is in favor of Shewhart.
     http://www.asq.org/pub/jqt/past/vol32_issue4/qtec-341.pdf
    3)  Your comments about Cp=1 are interesting. From what I’ve read, I thought that Cp=1.33 was popular.
    4)  Your comments about the number of dies on a wafer are also interesting. Is this mainly how Cps were improved ?
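    A few lines of Python reproduce the kind of Monte Carlo described in point 1 (in place of the Excel runs); being random, the exact min/max values will differ from run to run and from the .64 to 1.41 quoted.

import numpy as np

rng = np.random.default_rng(0)
for n in (30, 60, 120):
    # 500 samples of size n from N(0,1); look at the spread of their sample SDs
    sds = rng.normal(0.0, 1.0, size=(500, n)).std(axis=1, ddof=1)
    print("n = %3d : sample SDs range from %.2f to %.2f" % (n, sds.min(), sds.max()))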

    0
    #146148

    Barry U
    Participant

    Is there a limit to the length of postings here ?
    My last posting didn’t appear on the overall listing but it does appear on the topic listing.

    0
    #146156

    Barry,
    I didn’t claim Dorian Shainin used n = 30 in a DOE. I told you what he taught in a class in 1982 at G.I. in Scotland as the sample size needed to estimate sigma with 95% confidence. I know – I was there!
    He also claimed that if you plot the estimate of sigma as a function of n, the curve becomes asymptotic at about n = 30. (Remember, in those days people didn’t have Excel or PCs.)
    These assumptions were also used at Motorola in the 80’s – I know because I was there too!
    If you research the sample size for which the simulation of a normal distribution approaches a normal shape, you will find it is about n = 30. It is available on the internet.
    Now, I don’t understand why you raise the points ‘e’ and ‘f,’ because I think most people would agree with your conclusions. However, that didn’t stop Reigle from presenting it as a ‘standard’ for the 1.5 sigma shift – he did!
    I apologise if I’m now suspicious of your motives, but we’ve had Reigle appearing in various disguises trying to create a myth!
     

    0
    #146162

    Barry U
    Participant

    Sorry Andy. What I meant was that the DOE example was the only reference I could find for Dorian Shainin using n = 30.
    The Excel runs I described show that a lot more than 30 points are needed to estimate sigma with 95% confidence.
    After a lot more playing with Excel and curve fitting, I’ve come up with an equation that holds for n = 10 to 100 with less than a 4% error, giving the maximum estimate for sigma with a population sigma of 1.0:
    Maximum estimate for sigma = 3.03 * n^-0.204
    A similar one could be generated for the minimum estimate of sigma – maybe a way to while away some more time  ;)
    If you plot this curve you will see that there is nothing special about n=30 !!  If anything, it doesn’t level off till at least n>100.
    Do you have any more information on how Dorian came up with his n=30?  I just can’t see how he could arrive at such a conclusion.
    I’m a bit lost on the comment about Reigle.  I assume you are saying that the myth he is trying to create is that n=30 is responsible for the 1.5.  If so, the above discussion does not support his myth.
    There is also a second part to my last post … for some reason it didn’t appear on the overall list.
     

    0
    #146163

    Barry U
    Participant

    I’ll have to admit I did “cheat” a bit in deriving the formula.  After hundreds of Excel runs, I then realised that I could use Chi-square tables to compare variances.  I used the tables in Excel to generate a list of errors in estimating sigma at 95% confidence, for values of n from 1 to 100.  This was followed by curve fitting to get the result.
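    A sketch of that chi-square shortcut is below, assuming a two-sided 95% interval for sigma given a sample SD of 1 (the post does not state the exact convention used, so the fitted constants will only roughly match 3.03 and -0.204).

import numpy as np
from scipy.stats import chi2

n = np.arange(10, 101)
# Upper limit of a two-sided 95% interval for sigma when the sample SD is 1 (assumed convention).
max_sigma = np.sqrt((n - 1) / chi2.ppf(0.025, n - 1))

# Fit max_sigma ~ a * n^b by least squares on the logs.
b, log_a = np.polyfit(np.log(n), np.log(max_sigma), 1)
print("maximum estimate for sigma ~ %.2f * n^%.3f" % (np.exp(log_a), b))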

    0
    #146175

    Barry,
    No problem .. I replied late last night after an exasperating day!
    Concerning Dorian’s n = 30:
    My impression is he used a normal ‘hat’ and drew random numbers to estimate sigma for several subgroup sizes. I don’t know how many times he simulated each subgroup, because Dorian kept a lot of information to himself, only offering an occasional hint. He also published little.
    If I felt inclined to investigate it further, I would plot estimates of sigma for a given subgroup size as a function of the subgroup size. I’d expect to see a curve with a lot of scatter for small subgroup sizes, with the scatter steadily decreasing for larger subgroup sizes, like a funnel with the spout pointing toward sigma = 1. I would then read off the subgroup size where the scatter falls within the range (0.95, 1.05).
    I suspect it will be close to n = 30, but I may be wrong!
    I’d have to be pretty determined to try to work this out mathematically from first principles :-)
    From the numbers you gave, I suspect you ran about 500 simulations. If you run about 1500, I think you’ll find the max. value for a subgroup size of 30 samples is closer to 1.7.
    When I studied this problem it was from the perspective of process capability, and in rebuttal of Reigle’s claim that the 1.5 shift was due to a difference between a short-term and a long-term estimate of sigma.
    My reasoning was that if I set up control chart limits based on a subgroup size of 30, then in some cases the mean will be able to drift, since, as you’ve pointed out, n = 30 is too small to achieve 99.73% confidence. However, if I used control chart limits based on n = 3, g = 10, in some cases the mean would only be able to drift by as much as 0.6.
    Irrespective of these results, no self-respecting process engineer would use either of these sample sizes to calculate control limits, which is why I reasoned Dr. Harry became confused with the Motorola equipment ‘head-to-head’, which used a sample size of n = 30.
    I suspect the myth Reigle tried to create was that Motorola’s Six Sigma success was mainly due to the efforts of one man – Dr. Harry.
    When I first came across Dr. Harry’s Filtration paper in 1986, or so, he was trying to peddle conventional statistics to a busy community of process engineers who had no processing capability whatsoever. This is why we started off with Dorian’s simple pen and paper methods!
    Since I was well aware that most engineers and managers tossed Dr. Harry’s Filtration paper in the trash, the fact that Reigle was able to put a copy up on a website, along with other Motorola confidential information, told me who he really is, and what he’s trying to do!
    Just my opinion!
    Cheers,
    Andy
     


    0
    #146187

    Barry U
    Participant

    Andy,
    You will be very surprised at this, but to ensure a maximum variation on the estimate of sigma of 1.05, at a 95% confidence, where the “true” value of sigma is 1.0, n = 2,230.
    This gets rather impractical to verify using Excel, but Chi square tables give a quicker result.  As I said, I did run hundreds of tests using Excel with smaller values of n.
    At n=100 the maximum estimate for sigma is 1.18, a long way from 1.05.  If you plot the equation I derived, you will see how slowly it approaches 1.0.
     

    0
    #146188

    Setting tolerance limits to +/- 3 sigma was common practice ..
    I thought the Motorola target prior to 1990 was Cpk = 1.33.
    No, the main way Cp’s were improved was via two methods:
    1. If a measurement system couldn’t measure an in-process parameter with 95% confidence, the tolerance was widened so it could, irrespective of the designer’s requirements. To my knowledge this had no effect at Final Test, performed at another site, and the reason was that designers generally liked tight tolerances, sometimes even tighter than +/- 3 sigma.
    2. Everyone focussed on reducing variation. This was accomplished by finding sources of variation – for example, at the front of a furnace tube, or at the edge of a wafer – using Multi-vari charts. Temporal variation was not a major concern – only after a tube change – and we got a handle on that.
    My understanding is Phoenix took a similar approach.
     
     

    0
    #146189

    Barry,
    I guess I’m only half-surprised, because I have used some sample size calculators in the past, and the sample sizes always seem extraordinarily large.
    Perhaps I misunderstood what he said .. but there is ample evidence for the use of n = 30 within the realms of Six Sigma.
    In my simulations, I only looked for evidence of a shift based on the Reigle standard, and not for the sample size needed to estimate sigma with 95% confidence.
    I’ll give it more thought when I have more time.
    Cheers,
    Andy

    0
    #146190

    Barry U
    Participant

    Andy,
    Some more research and I found what your good mate Reigle has to say about it:
    ” The sample size of N = 30 results from n = 5 and g = 6, or N = ng = 30. You will find that 1 / sqrt(N) has a “point of diminishing return” at about 30. You will also see that 1 / sqrt(N) is the value s / sqrt(N) for the case NID(0,1). Just plot it out and you will see. “
    I thought he might be joking, but his post does seem to be serious. I hope this is not what you are using as a basis for n=30?
    Do you or anyone else have any references to the “ample evidence for the use of n = 30”?  I can’t find any at all apart from what I’ve already discussed.  I’d love to be surprised!
     
     
     

    0
    #146191

    Barry,

    No .. I’ve made this point on several occasions previously: n = 30 is too small to calculate control limits or estimate process capability.
    As I explained, the only reason I looked at n = 3, g = 10 was to rebut Reigle’s assertion about the 1.5 sigma shift.
    I should also like to point out that before going into “training”, Dr. Harry worked ‘under’ Dorian Shainin, so I would expect him to use a similar argument. To my mind, my use of the expression ‘asymptotic’ implied a ‘diminishing return.’
    On a slightly different tack, I wonder if anyone has considered using a recursive relationship to estimate sample size, such as the method due to Wald. In other words, simulate a few numbers, input the value of sigma into the sample size formula, re-calculate the sample size, and do this for a number of subgroups.
    I typically use 30 subgroups of n = 5 for Shewhart Charts.
    Andy
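    One possible reading of the recursive, Wald-style idea Andy floats is sketched below; the margin of error, the confidence level and the use of the usual sample-size formula for a mean are all illustrative assumptions, since the post does not specify them.

import numpy as np

rng = np.random.default_rng(7)
z, E = 1.96, 0.25                  # assumed: 95% confidence, target margin of error on the mean
x = rng.normal(0.0, 1.0, size=5)   # start with a small subgroup
while True:
    s = x.std(ddof=1)                              # current estimate of sigma
    n_req = int(np.ceil((z * s / E) ** 2))         # re-calculate the required sample size
    if len(x) >= n_req:                            # stop once the required size stops growing
        break
    x = np.append(x, rng.normal(0.0, 1.0, size=n_req - len(x)))   # collect the shortfall

print("final sample size %d, estimated sigma %.2f" % (len(x), s))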
     

    0
    #146246

    mand
    Member

    You don’t get it, do you?  Reigle is a CON MAN.  Try this quote from him on the n=30 bovine dung:
    “1) Declare a random normal model such that NID(100,10). 2) Declare this model to possess an infinite degrees of freedom. 3) Call this distribution the “theoretical model.” “
    Now who is he trying to fool ?  Perhaps the sheep out there would like to raise a paw and baaaaaaaaaaaaaaa  ?

    0
    #146259

    Barry U
    Participant

    Andy,
    I assume you are saying that you are not aware of any reason for choosing n=30?
    Might I suggest that Harry and Reigle have chosen n=30 as a way to try to justify the +/-1.5 sigma?
    I will explain, without the gobbledegook they have used to hide the truth.
    Chi square tables give us a means of testing the reliability of the standard deviation (SD) of a sample as an estimate of sigma for a population.  For example, if we use Excel to build 30 normally distributed points with a population mean of 0 and sigma of 1.0, then use Excel to measure the SD, it will never be exactly 1.0.
    Chi square tables tell us the likelihood that sigma will fall within a certain range of the sample SD.  For a sample SD = 1:
    1 / (Chisq.95 / (n-1))  <  sigma^2  <  1 / (Chisq.05 / (n-1))
    We can choose whatever value we like for n, but in the special case of n=30, and a confidence interval of 99.5% (rather than the more common 95%), we get:
    sigma^2 < 29 / 13.12
    and  sigma^2 > 29 / 52.3356
    or  0.74 < sigma < 1.49
    Harry then takes the right-hand value, multiplies by 3 for 3 sigma control limits, subtracts 3 and gets his “1.5”!!
    If we use more reasonable values, say a control chart with 30 sets of 5 points, at 95% confidence, we get
    0.91 < sigma < 1.11
    or a factor of “0.33” instead of “1.5”!!
    Of course, neither of these numbers has any relevance at all because control limits are not probability limits!
    Can readers follow this ?
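    Barry’s arithmetic is easy to check with chi-square quantiles; the sketch below uses the same confidence conventions as his worked numbers (0.5%/99.5% quantiles for the n = 30 case, 5%/95% for 30 subgroups of 5), and lands on the quoted “1.5” and “0.33” to within rounding.

from math import sqrt
from scipy.stats import chi2

def sigma_bounds(n, q_lo, q_hi, s=1.0):
    """Bounds on sigma implied by a sample SD of s from n points (chi-square interval)."""
    lower = s * sqrt((n - 1) / chi2.ppf(q_hi, n - 1))   # large chi-square quantile -> small sigma
    upper = s * sqrt((n - 1) / chi2.ppf(q_lo, n - 1))   # small chi-square quantile -> large sigma
    return lower, upper

lo, hi = sigma_bounds(30, 0.005, 0.995)                 # n = 30, 0.5%/99.5% quantiles
print("n = 30 : %.2f < sigma < %.2f, 3*upper - 3 = %.2f" % (lo, hi, 3 * hi - 3))

lo, hi = sigma_bounds(150, 0.05, 0.95)                  # 30 subgroups of 5, 5%/95% quantiles
print("n = 150: %.2f < sigma < %.2f, 3*upper - 3 = %.2f" % (lo, hi, 3 * hi - 3))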

    0
    #146262

    Barry U
    Participant

    ” …the reason was designers generally liked tight tolerances, sometimes even tighter than +/- 3 sigma.”
    I don’t understand. Why was this?  Wouldn’t this have adversely affected yield?

    0
    #146299

    Barry,
    Thanks for the explanation. I found it most enlightening, especially after changing the subject of the sample size equation and plotting the ‘ratio of the sigma error’ versus sample size n. The curve sqrt(1/n) doesn’t appear to be asymptotic at all …!
    Cheers,
    Andy
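    For reference, tabulating the curve in question shows a smooth decline with no obvious knee at n = 30:

from math import sqrt

for n in (5, 10, 20, 30, 50, 100, 300, 1000):
    print("n = %4d : 1/sqrt(n) = %.3f" % (n, 1 / sqrt(n)))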

    0
    #146302

    Barry,
    I think this is a good example of a wrong target / metric.
    Silicon circuit design is a little like software design – there is an element of ‘suck it and see.’ Now, silicon circuit design might be ‘easier’ today with the availability of software tools; but I know software does not respond well to ‘make sure it works first time.’ For example, API calls may not interface with other modules as well as expected.
    (I need to explain the concept of ‘matching’ – a limitation of Dr. Harry’s version of Six Sigma not experienced in TPS.)
    For now, let’s consider the challenge of having a new product introduction in the mid-80s with a high first-time yield.  It was extremely difficult – yet it was expected.
    What better excuse than to argue the wafer fab just can’t meet our tolerance requirements. It’s not their fault – it’s the capability of their equipment! :-)
    Cheers,
    Andy

    0
    #146303

    Barry U
    Participant

    Andy,
    It’s interesting that if you look at this thread, no one else has made a comment in the technical discussion.  I feel that most people not only don’t understand the basics, they don’t question them either.
    If people did ask even very simple questions, everyone would be laughing at the very mention of the nonsensical 3.4 DPMO.
    Most people follow the crowd – at least until people like SammyTexas start talking about their dramatic failures.  By contrast, it is wonderful to come across the rare people like you, who think for themselves.
     

    0
    #146309

    Barry,
    I don’t think that’s the case. There has been plenty of discussion before, and many others have made similar comments – particularly Statman, John H, Stan, Gabriel, Darth, Vinny, Phil, Peppe, and many others too numerous to mention.
    To your credit, you are the only person who has ‘taken the bull by the horns’, so to speak :-)
    Reigle has been the main (only?) protagonist.
    I suspect most people are happy to relax, follow your structured arguments and support your approach without feeling the need to intervene.
    As for myself, my last contract ended last May, so I’m not under the same pressure as everyone else.
    Cheers,
    Andy

    0
    #146311

    Barry U
    Participant

    Thanks Andy.

    0
    #146455

    Barry U
    Participant

    Andy, do you have a link to this quote please ?
    I’d be interested to read his statistical criticisms of six sigma.

    0
    #146470

    Barry,
    Here’s the link.
    http://www.lean-service.com/home.asp
    I’d be interested in an exchange of views.
    In my opinion, much of what the guy says has a ‘ring of truth,’ but I’ve learnt that half-truths are often worse than lies.
    Specifically, I do not recognise the two pillars of the TPS in the methods proposed.
    Take his book ‘Freedom from Command and Control’: while I agree with 95% of the content, I disagree with his conclusions. Will it work? Who knows, when there are no checks and balances?
    What we desperately need in the West is a ‘harmonisation’ between two extremes. This is what TPS does. Therefore, we need a management style that contains both a ‘Laissez-Faire’ style of management and a ‘Command and Control’ style of management. Some would say this is illogical, but it is not, provided one style is ‘hidden.’ This was one of the great contributions of Bruce Lee when he explained the precepts of Oriental ‘Fuzzy Logic.’
    Anyone who has worked for a Japanese corporation would recognise the ‘Mission Accomplishment Style of Management’:
    ‘This is what we need to achieve – you are now responsible for achieving it. If you have any problems, come back and see me and we can discuss it. If we need some extra help I can arrange it.’
    Believe it or not, this used to be a common approach within Motorola before it became ‘topsy-turvy.’
    Cheers,
    Andy
     

    0
    #146515

    Barry U
    Participant

    Andy,
    I’ve come across some of this fellow’s articles previously.  He seems to be a better thinker than most.
    There are dozens of articles on his site:  http://www.lean-service.com/6.asp   Are there any to which you are specifically referring?
    His  http://www.lean-service.com/systems.asp  is a good Deming style article on systems thinking.

    0
    #146517

    Barry,
    One of the problems I have with this guy is that he’s using the Toyota brand to market his ideas.
    Take his article “On Target To Achieve Nothing”, originally published in The Observer.
    My first objection is – what has abolishing all targets got to do with TPS? Is this a Tai-ichi Ono philosophy? Not to my knowledge. It appears to me to be a crass, cheap marketing ploy to advance oneself.
    It’s a bit like coming up with your own version of Six Sigma and selling it to other companies saying, “This is what Motorola used to save millions of dollars,” when in fact they didn’t.
    My second objection is the proposition itself – that of abolishing all targets. There is no doubt there are too many targets in government circles in the UK – in the national health service, schools, and public services – and many people cheat to appear heroic and achieve promotion. But is that a good reason to abolish all targets? I don’t think so! That seems a bit like abolishing all sales of alcohol because some people become drunk.
    As I mentioned previously, when I contract a builder to build an extension, I don’t tell him how to do it. I just tell him the size, the features I want, and how much I’m willing to pay. That is a target. Trying to distinguish between targets and goals is just semantics. I certainly wouldn’t want a builder who is one of the ‘Fountainhead.’ If he wants to build his own design, let him do it on his own property with his own money!
    Anyone who has run a business will recognise the importance of good fiscal policy; to say otherwise is foolish – like the fool who has to shout ‘for whom does the Grail serve?’ to save the Fisher King, but can’t remember the words.
    To my mind, there ought to be a harmonisation between defining objectives and measurables on the one hand, and a person’s freedom to express themselves on the other. This principle seems to be encapsulated by the ‘Mission Accomplishment’ style of management / leadership.
    In private, at the seminar I attended, the CEO spoke about some people in their clients’ firms who ‘just didn’t get it,’ and his answer to this challenge was ‘just fire them!’ What hypocrites – on the one hand they come out against sales people having to use ‘scripts’, so that sales staff can relate to customers; on the other hand, if there is any dissension, people are fired!
    In conclusion, I think it is important for people to understand that the two pillars of the Toyota Production System are ‘one-by-one’ confirmation and Just-in-time.
    ‘One-by-one’ confirmation leads to single-piece flow – not demand, as Vanguard state.
    Just-in-time leads to waste (muda) reduction.
    Why it’s necessary to modify TPS for services – I have no idea. I can only conclude they didn’t understand TPS in the first place.
    Cheers,
    Andy
     
     
     
     

    0
    #146521

    Barry U
    Participant

    Andy,
    I agree that the article does not discuss TPS.  I have sent Seddon a note asking if any of his articles do so.
    The article is a discussion of Deming’s point 10:
    “Eliminate slogans, exhortations and targets for the work force.”
    Unfortunately, Seddon’s article does not define what he means by “targets”.  What he is talking about are “targets for reduction of variation” or improvement objectives, such as “zero defects” and “sigma levels”, as well as targets in the form of numerical quotas.
    Deming gives a detailed discussion of the negative effects of such targets in “Out of the Crisis”.
     

    0
    #146523

    Barry,
    I don’t want to contradict Deming, but I can tell you an anecdote about Hitachi.
    When we first went to see them in 1985, we decided to ask them the ‘catch question’ of how they determined the number of particles allowed on a piece of equipment – thinking they had some clever way of relating it to rolled yield.
    The Japanese engineer told us that since they knew the selling price of the product, they could work backwards and budget contamination for each piece of equipment in the line. We found this quite shocking, as marketing competes on price, which implies contamination levels might have to be reduced in the future.
    It changed the way we thought about our business.
    Cheers,
    Andy

    0