iSixSigma

Fontanilla

Forum Replies Created

Viewing 100 posts - 1 through 100 (of 167 total)
  • Author
    Posts
  • #190086

    Fontanilla
    Participant

    There is only one constraint at a time. Once you resolve a constraint, a new constraint will emerge, but only one constraint restricts system performance at any given time.

    0
    #189956

    Fontanilla
    Participant

    I think the tutor is wrong. The value of defects and opportunities is not an intellectual exercise; it needs to point to real-world solutions. The “art” of DPMO is not defect counting (that’s the easy part), it’s defining the opportunities.

    Assuming the goal here is to focus limited resources on problems, the opportunities for “dints” in the bodywork should be a count of all the manufacturing operations where those defects could occur (common cause), plus an understanding of where in the process the defects occurred. Yes, this is really hard, but it’s how it’s supposed to work. To count bodywork as a single opportunity for defects is ludicrous unless the body is purchased painted and whole, and installed in a single operation.

    The danger of mapping opportunities to the factory floor is the risk of establishing a ludicrously high number of opportunities so that 3 dings look like “six-sigma quality levels”. Defects perceived by the customer as negative differentiators of quality are unacceptable, regardless of the quality level. Don’t tell me I’m buying a six-sigma quality car with dings and then tell me it’s my perception problem!

    0
    #187103

    Fontanilla
    Participant

    I would be looking for a way to get more frequent measurements of performance – is there a reason the data is only available quarterly?
    Sometimes implementing a more effective measurement system is a key deliverable of a Six Sigma project (before you can even improve the process).
    I’d also consider whether 100% quality rate is really necessary.  Accepting absolutely no defects is expensive – and may not necessarily be required – depending on the process. 
    Having said that, here are the sample size calculators I developed and use for these calculations – http://www.six-sigma-analyst.com/calculators.php
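    If you want to sanity-check what such a calculator is doing, here is a minimal Python sketch (my own illustration under standard assumptions, not the code behind that link) of the classic sample size for estimating a proportion to within a margin of error:

    import math
    from scipy.stats import norm

    def sample_size_proportion(p_est=0.5, margin=0.05, confidence=0.95):
        # z critical value for a two-sided confidence interval
        z = norm.ppf(1 - (1 - confidence) / 2)
        # n = z^2 * p(1-p) / E^2, rounded up to the next whole unit
        return math.ceil(z**2 * p_est * (1 - p_est) / margin**2)

    print(sample_size_proportion())  # ~385 at 95% confidence, +/-5% margin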

    0
    #186625

    Fontanilla
    Participant

    There is some free online training here: http://www.six-sigma-analyst.com. Also consider the American Society for Quality (ASQ) for certification. It is very well respected internationally.

    0
    #186624

    Fontanilla
    Participant

    There are some sample size calculators available here that may help: http://www.six-sigma-analyst.com/calculators.php

    0
    #186622

    Fontanilla
    Participant

    Perhaps considering historical error rates will help to explain this one. If you look at the history of the process, does the previous data allow you to see the performance over a bigger sample? That may help them to think it through.

    0
    #185601

    Fontanilla
    Participant

    Hi Bower,
    I’d love to see that spreadsheet for d2 calculations too.  Thanks.
    [email protected]

    0
    #59588

    Fontanilla
    Participant

    I have a similar issue. I’m not currently employed and am taking a Green belt course for general knowledge. I need a project to work on. I’m looking into pro bono work for my town or non-profits. I live in Fairfield CT. Any ideas?

    0
    #59585

    Fontanilla
    Participant

    Please send me a copy of the card drop game also…
    [email protected]
    Thank you…

    0
    #183077

    Fontanilla
    Participant

    Jsev607, Gary’s right.  You can set up the limits in advance.  I’ve done it too, and it does show you very quickly if you have a shot at success, or need to stop and revise things, before you make a bunch of scrap.  It works.  Calculate based on your customer requirements and your known process performance (or benchmarked performance if a new process), and set your reaction plan based on the limits you derive.  When the first point falls outside of the limits, you’d better start to pay close attention.
    You’re right too.  You need to look at the chart of the data and look for special cause indicators in the shape of the histograms, run charts, etc.  The theoretical limits are a starting point for you to quickly know if you have a problem, but they don’t substitute for the true limits created by the process itself.
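    For anyone who wants to try it, a minimal sketch of the idea (an X-bar chart with limits set in advance; the target, sigma, and subgroup size here are made up, and sigma would come from your known or benchmarked performance):

    import math

    def xbar_limits(target, sigma, n):
        # 3-sigma limits for subgroup means, computed before production starts
        se = sigma / math.sqrt(n)
        return target - 3 * se, target + 3 * se

    lcl, ucl = xbar_limits(target=10.00, sigma=0.02, n=5)
    print(f"LCL={lcl:.4f}  UCL={ucl:.4f}")  # react when a point falls outside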
    Good work guys!

    0
    #183076

    Fontanilla
    Participant

    I totally disagree with your statement, “I won’t care if a part takes 1 run today and the next part takes multiple runs, as long as a part is getting better with each run.”  That, my friend, is a wasteful process.  I’ve seen projects where people thought exactly the same thing.  In one case, we had product that sometimes “came in” in 2 tests, sometimes in 60.  By carefully rooting out the key inputs that were varying (widely) in the process, we were able to take this down to essentially first-pass good parts (although the really smart guys in sales and marketing told the customer we would ensure 3 “break-in” cycles before testing, so now we are locked into 4 cycles.  Geniuses.)
    You need to follow your DMAIC steps, paying close attention under “Analyze,” and find which varying inputs are causing your output to vary.  Which X’s drive your Y?  There is a very scientific reason why one piece takes 1 pass and others take multiples.  Understand the physics, understand the mechanisms, and you will reduce your variation.
    Oh and one other thing, I’ve assumed your measurement system passes muster.  If not, don’t neglect “M.”
    Good luck.

    0
    #182641

    Fontanilla
    Participant

    I’m assuming you’ve already created Pareto charts of the various failures, correct?  What’s the most common failure?  When does it occur?  Have you also plotted these failures on a diagram of the product and looked for patterns with respect to the location of the failure?  For example, you find a high occurrence on one area as opposed to another.  Have you asked the same questions while looking at individual stalls and individual assemblers?  Are there patterns there?  It’s been my experience that there is always a pattern.  These things are never random.  When they appear random, you haven’t asked the right question of the process.

    0
    #182156

    Fontanilla
    Participant

    Sounds as though you are trying to answer a textbook question.  Is that the case?  Or, is this a real-world problem?

    0
    #181083

    Fontanilla
    Participant

    They work for me.  I discussed your and Stan’s contention that .001mm isn’t possible without interferometry or other optical means with one of them yesterday.  He got a good laugh out of it.

    0
    #181026

    Fontanilla
    Participant

    1) A “mechanical gage” is one that uses a mechanical mechanism to measure the sample, either directly or indirectly.  Often, mechanical gages include a fixture which securely positions the sample, and an indicator which touches the sample.  Indicators may be mechanical or electronic.  They may measure length, runout, roundness, diameter, etc.
    2) A vernier caliper uses a series of lines scribed on a slide; the user reads the measurement of the sample from the position of those lines.  A digital caliper gives a digital reading of the sample.
    3) The mechanical motion of the digital caliper is converted using a number of techniques.  Most common is to use a small encoding device to convert the linear travel into pulses (counts), which are then converted to a digital measurement (a toy sketch of this conversion follows below).
    4) A digital caliper is a mechanical measurement device.
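    As promised in point 3, a toy sketch of the count-to-reading conversion (the resolution value is hypothetical; real encoders and scales differ):

    def caliper_reading_mm(counts, mm_per_count=0.01):
        # encoder pulses from the slide's linear travel -> digital reading
        return counts * mm_per_count

    print(caliper_reading_mm(1234))  # 12.34 mm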

    0
    #181025

    Fontanilla
    Participant

    To quote your own posting of a few days ago (on a different topic), “You’re wrong.”
    Only I’ll provide some further explanation rather than just point out someone’s error and walk away…
    Interferometry is called for in certain situations.  We do use it here when measuring in the sub-micron level.  It works very well, particularly when measuring the profile of a surface.  It is very expensive!  I sincerely doubt Paula needs to use anything like that for her measurements, given that she wants to look at .01mm tolerancing.  It can be done, but it will cost her.
    I don’t “think” my gage tells me I have .001mm resolution.  I know, based on the R&R studies performed on it.  Once again, I’ll make the statement that you and Stan and the others must never have used mechanical gages to measure to this level, because we do it every day.  Obviously you think we don’t know what we’re doing, but you know, if we were so wrong, our FTQ and warranty performance would quickly tell us.  Those two customer satisfaction indicators are either best in class or near it for most of our products.
    Here are some more references to check out.  We use their equipment daily in our plants.  Check out Marposs.  We’re holding some products to about .003mm using post-processing feedback from Marposs units.  Check out Heidenhain.  We use their glass scales in several gages to measure a lash setting on a bearing, which is required to be within .004 – .010 mm.  Yes, that is 4 to 10 microns.  Gage R&Rs to less than 10%.  Even the floor CMMs are certified down to about .0003mm although resolution on them is actually about .001 – .0015mm.  The gage room CMMs do a little better.
    Maybe you guys are just a little behind the times.  Maybe technology has passed you by.  Probably you need to dust yourselves off and get out there and take a look at what’s available and what you can do with it.

    0
    #180881

    Fontanilla
    Participant

    Chris, I’ll contact you separately regarding your questions.  And yes, I measure on some gages to .001mm, not inches.  I’ve not measured in inches in years.

    0
    #180730

    Fontanilla
    Participant

    In fact, it is you and Andy U who’ve been shown the water but instead turned your noses upwards and walked away.  You’ve presented nothing to support your claims, yet remain steadfast in your assumption that I am somehow wrong in my data and my processes.  I fail to understand how you two could be so stubborn in your refusal to accept that maybe, just maybe, there is a possibility that a mechanical gage can precisely, accurately, and repeatably measure .001mm.  Considering that Andy doesn’t know how large a micron really is, distorts the writings of referenced individuals, and then denies the same as being worthy of reference, maybe I shouldn’t be surprised.
    .01mm is easily measured with mechanical gages.  Take a Mitutoyo micrometer with resolution to .001mm and you’ll see it.  Use one with resolution to .0005mm and you’ll be even better, but not quite able to measure microns.  Those devices are just examples of readily available store-bought instruments.  With proper fixturing and cleanliness, you can see down to .001mm.  Digital probes work quite well and aren’t susceptible to electrical noise.
    Before you reply with some sort of smart-alec remark, do yourselves and anyone else who may be entertained by this exchange a favor and check out SOLARTRON PROBE #DP/5/S WITH T-CON ASS’Y and METRONICS “GAGE CHEK” DISPLAY UNIT W/(8) DIGITAL INPUTS.  Do some research on this and report your findings here.

    0
    #180671

    Fontanilla
    Participant

    You really are showing how little a person you are.  “Read the entire site”??  Really.  You send a link to one page and then expect the reader to peruse the entire site to find your one reference.  The insinuation that I could have hacked the site and re-written the text is also quite insulting.  Grow up.
    And, maybe you’ve missed the posting where I did explain the use of anvils in our process.

    0
    #180659

    Fontanilla
    Participant

    Nice that you’ve taken George’s words and added your own interpretation to them.  Nowhere in that link is the word “anvil” used.  That’s really great.  Twist the context to fit your view.  Really big of you.
    You’ve also demonstrated you don’t know what a micron is.  .002mm (2 microns) = .000080″ (80 millionths).  NIST standards are certified, per George (and according to the documentation supplied with ours), to +/-.000005″ (5 millionths), which is .000127mm (or about 1/10th of one micron).  We use a couple of these to show linearity of the gage, plus have one for (near) nominal size, for when the gage comes due for recertification.
     

    0
    #180658

    Fontanilla
    Participant

    Who said anything about calipers?  I use digital probes when measuring to that level.  The probes are good down to .0002mm.  Below that, you don’t get anything more.  And yes, I measured in a temperature-controlled gage lab the first time I tried it.  First time, GRR came out to 147%!  Found an issue with flatness on the locating surface and corrected it, along with a cleanliness issue.  Second time, GRR was only 78%.  Doing well, right?  50% reduction.  Found the reason was mostly within-part variation.  Clocked the parts and GRR dropped down to 8%.
    Tolerance on this particular feature is .003mm. 
    Using a Pratt & Whitney gage (single probe and anvil setup) with a Sylvac 80 measurement display unit on it, repeatability of the probe was .00015mm.  Again, within-part variation was the driver behind the initial poor GRR.  Clocking the parts results in a similar GRR of 8%, but the setup is much less sensitive to dirt due to the dirt grooves on the anvil.  We prefer the P&W.
    Too bad all of you nay-sayers don’t seem to know anything about this type of measurement.  You’d do well to open your eyes once in awhile and maybe learn something.  But I see you four guys are well acquainted with each other from your various responses on this forum over the years and seem to feed off of each other.  That’s too bad in that it seems to have limited your ability to grow.
    Well, I have better things to do, like finish running off my lathes, where I’m measuring bore diameters with Comptor gages that have resolution to .002mm.  With .070mm tolerance, that’s more than enough.  My bore depth gages (custom height gages) read down to .001mm but accuracy is only .01mm.  That’s o.k. given my .200mm tolerance.
    Wish you could come and see it.  You’d think differently, I’m sure.  But no matter.  You guys go back to your, well, whatever close-minded things you do.  I’ll continue to make car parts, some of which go into “Big 3” cars, and some which go into Asian manufacturers’ vehicles.  They are pretty happy with the products. 

    0
    #180564

    Fontanilla
    Participant

    Paula, see the following link for a starting point.
    http://www.mahrfederal.com/index.php?NodeID=8282

    0
    #180563

    Fontanilla
    Participant

    Chad darling, how can you say such a thing without any knowledge of the facts?  Opinion only?  That’s a shame.  Opinioneering is a poor substitute for actual data-driven knowledge of your subject matter.
    Surely if you’ve built soooo many precisely machined components, you must have had good GR&Rs.  Are you saying that you in fact didn’t have good GR&R and that you were “flying blind?”  I doubt it.  Maybe you meant to say that you, in your facilities, used different measurement systems to produce your fine products.
    I know GR&R, trust me.  I’ve been at it for quite some time.

    0
    #180562

    Fontanilla
    Participant

    Exactly my point!  .01mm is child’s play for measurement with even a mechanical indicator.  My original post was to say that Paula could use this means to measure and that optical means, etc. weren’t required.  I merely used the tighter tolerance and more precise measurements to state that we can do better than .01mm.
    .001mm is also achievable but does take more effort.  Yes, debris can throw your measurements.  You need to make sure parts are clean.  But, optical means aren’t required even at that level, depending on the product being measured. 
    The nay-sayers who claim this cannot be done have obviously never attempted it, nor seen it done.  Were they to come into some of our plants, they would see it done every day.  Sadly (for them), they won’t get that opportunity.  Seeing is believing.  I speak from experience on what I see in production.

    0
    #180504

    Fontanilla
    Participant

    Only if she listens to your (bad) advice.

    0
    #180494

    Fontanilla
    Participant

    I know perfectly well the difference between .001mm and .001 inches.  All you really smart people who think it can’t be done have never done it.  My GR&Rs come out to around 10-12%.
    What do you think the tolerance on most press-fit bearing bores is?  What do you think the tolerance on fuel injector nozzles is?  Power steering pump vanes?  If I have .006 millimeters total tolerance on a shaft diameter, how closely do you think I have to measure to be able to pass GR&R?  How do you think I do that?  In a lab?  No, on a production floor where I make thousands of these daily.
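    To put rough arithmetic behind that, here is a sketch of the usual %GR&R-to-tolerance calculation (using the common 6-sigma spread convention; the gage sigma below is illustrative, not a measured value):

    def pct_grr(sigma_gage_mm, tolerance_mm, k=6.0):
        # gage spread (k * sigma) as a percentage of the total tolerance
        return 100.0 * k * sigma_gage_mm / tolerance_mm

    # To hit ~10% GR&R on a .006 mm tolerance, gage sigma must be ~0.0001 mm.
    print(f"{pct_grr(0.0001, 0.006):.0f}%")  # -> 10%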
    Do yourself a favor and when a professional in the business tells you that it can be done, maybe check it out before you show your smallness.  Come out to the factory floor and see what we do.  You might just learn something.

    0
    #180311

    Fontanilla
    Participant

    Laugh all you want.  We do it.

    0
    #180306

    Fontanilla
    Participant

    In fact, I’m at a machine run-off right now and am measuring to .001mm (1 micron).  We routinely hold tolerances in the sub-micron ranges.  Tolerancing gets tighter every year…

    0
    #180193

    Fontanilla
    Participant

    That’s funny!  I work in the automotive industry and we routinely measure (accurately and repeatably) down to .001mm using mechanical gages with electronic indicators.  Our gages are good down to .0001mm.  There are a great number of different probe configurations and electronic indicators from Mahr Federal and others.
    Don’t waste your time with optical methods unless your shape is very complicated.  They take too much time to use.  Instead, look in the Thomas Register for gage companies in southeast Michigan and you’ll find all you need.  Companies like Reef Tool & Gage, Southern Gage, Dependable Gage, Bower, Hanlo, Enmark, and the list goes on and on.
    And, you should think of this as your civic duty to the country.  As you know, Michigan is suffering the worst from the economic slowdown.  Your gage purchases from these excellent companies will help to stimulate the local, state, and national economies.
    Cheers!

    0
    #178832

    Fontanilla
    Participant

    DPMO is rubbish.  Forget it.

    0
    #178722

    Fontanilla
    Participant

    Unless your simulation has noise built into it, you need to run it only once per experimental setup.  With no noise, a (computer) simulation will return the same result every run and thus, you will not get the confidence interval you are looking for.
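    A minimal sketch of the point, with a made-up response function: without a noise term every replicate is identical, so replication buys you nothing; with noise built in, replication supports a confidence interval.

    import random, statistics

    def simulate(x, noise_sd=0.0, rng=random):
        # toy deterministic response, with an optional noise term
        return 2.0 * x + 1.0 + (rng.gauss(0.0, noise_sd) if noise_sd else 0.0)

    print({simulate(3.0) for _ in range(5)})  # one unique value: {7.0}

    rng = random.Random(42)
    reps = [simulate(3.0, noise_sd=0.5, rng=rng) for _ in range(30)]
    m = statistics.mean(reps)
    half = 1.96 * statistics.stdev(reps) / len(reps) ** 0.5
    print(f"mean={m:.2f}, 95% CI ~ ({m - half:.2f}, {m + half:.2f})")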

    0
    #177523

    Fontanilla
    Participant

    Go to Gemba!!

    0
    #177522

    Fontanilla
    Participant

    Don’t you need the number of instruments to be fixed in the equation as well?  The level of complexity of the repair might also be necessary.  Whether it’s a routine service or a major rebuild will impact productivity.  You might show a tech working a week on a single instrument (major rebuild) vs another knocking out a dozen in a couple of days (routine maintenance).  Or, you might show a tech spending multiple weeks at a customer location ’cause he likes the free food and swag…

    0
    #177304

    Fontanilla
    Participant

    Here’s the link.  It’s VA approved.  It’s an online or campus based university.
    http://www.villanovau.com/Home/Content/VU/Military.aspx

    0
    #176445

    Fontanilla
    Participant

    Typically, in this type of tool trial, you’ll run until failure of either the product quality (i.e., quality degrades to an unacceptable level) or the tool itself.  Start with a fresh blade and run it until it’s no good, sampling along the way.  Repeat until you have enough samples to understand the shape of the degradation curve.  Set your maximum number of cycles before blade change at some conservatively safe level prior to failure.  You really don’t need to throw much complicated analysis at this.  Putting it into a run chart should be sufficient.
    Don’t get yourself into analysis paralysis.  Keep it simple.  Keep your customer in mind when performing your analysis.  If your customer happens to be the job setter, you’d better give him/her a simple explanation rather than a lot of fancy statistical analysis if you want their buy-in.
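    A rough sketch of that run chart, with invented wear data (assumes matplotlib is available; swap in your own quality characteristic and limit):

    import matplotlib.pyplot as plt

    # hypothetical sampled quality (e.g., burr height, mm) vs. blade cycles
    cycles  = [0, 500, 1000, 1500, 2000, 2500, 3000, 3500]
    quality = [0.02, 0.02, 0.03, 0.03, 0.05, 0.08, 0.14, 0.25]
    limit   = 0.10  # unacceptable beyond this

    plt.plot(cycles, quality, marker="o")
    plt.axhline(limit, linestyle="--", color="red", label="quality limit")
    plt.xlabel("blade cycles")
    plt.ylabel("burr height (mm)")
    plt.legend()
    plt.title("Tool wear run chart")
    plt.show()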
    Good luck!

    0
    #176366

    Fontanilla
    Participant

    Hi – This is not a Six Sigma project, this is a consulting project. I would start by benchmarking against other restaurants. Is everyone having the same problem? I would look at my cost structure: can some fixed costs be converted to variable costs? I would look at which menu items are more profitable. Can less profitable items and services be eliminated? Finally, look at your customers. Are they the “right” customers? Are they profitable? Make sure you get good customers, and discourage customers that cost a lot to serve (they only eat cheap items, no drinks or desserts, etc.). Good luck!

    0
    #176225

    Fontanilla
    Participant

    The previous postings are spot-on with respect to management involvement and commitment.  You should be hearing “pick me” from the respective plant managers.  Those who are silent can be considered “passive resistors” to change.
    You also want to ensure you don’t choose plants whose operations are in disarray.  While this “low-hanging fruit” may be easily picked (without applying any fancy tools!), you want to look for operations exhibiting stability.  Without a stable operation, you will be constantly changing your conclusions about the root causes and will then be constantly changing your thoughts and actions with your lean activities.  What you want to look for is a stable, mature plant, with an actively supportive plant manager and staff.
    This last point is one that can really get you into trouble though.  Sometimes, the plant manager is “supportive” but not engaged and it’s the staff who really controls the initiative.  Watch out for the “frozen middle” i.e. middle managers who are either passively or actively resisting the initiative.  These people fear change and will do whatever they can to protect the status quo.  Ensure your buy-in includes not only the plant manager, but the staff as well.
    Good luck!

    0
    #176012

    Fontanilla
    Participant

    Give me $1500 and I’ll certify you with an “Industry Standard” lean certification.  It’ll be worth as much as the cert’s you get from the other sources mentioned and will come a whole lot quicker!

    0
    #175759

    Fontanilla
    Participant

    Shawn,
    With the exception of the Tolerance Ellipse, ALL of the “Shainin” tools were and are “Six Sigma” tools first!  Shainin merely re-stated some of them and later, with the help of his sons, put clever packaging on the rest.  For example, the “trees” are nothing more than convergence or fault tree diagrams.  Isoplot is nothing more than a Scatter Plot.  B vs C tests are Tukey tests.  Tolerance Parallelogram is a Regression analysis.  Component swapping is a DOE.  I’ve actually written a training manual that incorporates the so-called “Shainin” tools without violating their many intellectual property claims.  See Wheeler’s and Bhote’s books.  Montgomery and Box also have many tools you’ll recognize in the “Shainin” box.
    Your consultant DOES indeed have an agenda, i.e. to make maximum profits for Shainin LLC.  And, once they are in, they can be like a virus that you just can’t shake.  They will tell you Six Sigma is “X to Y,” that is, that it supposes causes and then tests for them.  This is FALSE.  Six Sigma, done properly, follows exactly the “Y to X” approach that Shainin touts.  That is, Six Sigma leverages the clues found in the data to converge on the critical causes, or X’s, in the process.  Be very skeptical of their claims to the contrary.
    That said, Shainin is not all bad.  They do indeed help you to use your Six Sigma tools more efficiently.  Just keep in mind that “Six Sigma” tools are the basis for the “Shainin” tools.  They bring very little new to the table, technically.  What they bring is in organization.
    Good luck! 

    0
    #175264

    Fontanilla
    Participant

    Back to the original question!  When an error isn’t detected, I assume that down the line, somewhere in the process, that error is detected.  If that is correct, you can gain an understanding of the accuracy of your error-checker by comparing what that person has found vs what that person has missed.
    You’d do well to heed Mr. Butler’s advice and stratify your errors found data within the spreadsheets and across the agents.  Same for errors missed (and found later).
    Good luck!

    0
    #173358

    Fontanilla
    Participant

    It’s a mistake to try to apply correlation in this case.  ANOVA might be somewhat applicable but just barely.  You have ordinal data you’re trying to analyze with “normal” techniques.  The two shouldn’t be mixed in such a way.  If you feel you must put numbers to these, you would be better off using non-parametric analysis.  But the best way to analyze would be to simply chart the responses in a pie chart or bar chart.  You still need to do the “heavy lifting” when it comes to understanding why you receive scores in the lower categories to understand what is going wrong (as well as in the higher categories to understand what you are doing right).  Playing with ANOVA or correlation won’t help you in that respect.
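    If you do feel you must put numbers to it, a minimal non-parametric sketch along these lines (hypothetical 1-5 scores from three groups; assumes scipy is available):

    from scipy.stats import kruskal

    # hypothetical 1-5 survey scores from three service teams
    team_a = [4, 5, 4, 3, 5, 4, 4]
    team_b = [3, 3, 2, 4, 3, 2, 3]
    team_c = [5, 4, 5, 5, 4, 4, 5]

    h, p = kruskal(team_a, team_b, team_c)
    print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")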

    0
    #171346

    Fontanilla
    Participant

    I’m sure there are PLENTY of ideas.  In fact, I believe this question was answered within the last week or two. 

    0
    #171345

    Fontanilla
    Participant

    Right.  What are you evaluating for?  You need to better define what it is you want to learn from the process.  This will help you determine how to sample (frequency, number, and location).  How many lasers do you have?  1?  2?  20?  Are you looking to determine laser-to-laser variation?  If so, set up your sampling to gather pieces from each laser at some frequency.  Is it time-to-time variation you want to understand?  Over how long a duration do you expect to see a change?  Sample within that duration until you see the change you are looking for.  How many cappers do you have?  Is it capper-to-capper variation you want to understand?  Do you see variation over time on the cappers?  How much?  Set up a sampling plan to capture that variation, across cappers.
    As you can probably tell from my questions, your best bet is to design the sampling strategy to answer the questions you have of the process.  You must first determine what it is you want to learn, devise your strategy to learn that i.e. your sampling plan, then go execute your strategy (I recommend a trial run before you launch any large-scale study.)
    Good luck!

    0
    #171006

    Fontanilla
    Participant

    See John P. Kotter’s Leading Change (Harvard Business School Press, 1996) for eight stages you might use to assess your deployment.  It is really cultural change you’re aiming for, and Kotter does a great job of it in his book.

    0
    #170995

    Fontanilla
    Participant

    Brandon
    Because his name is “Capt.Kaizen”.
    Regards 

    0
    #170994

    Fontanilla
    Participant

    It depends.
    You should also take care to collect “practical experience.” Those certificates may be challenged against your real knowledge in the field.
    Good luck

    0
    #170978

    Fontanilla
    Participant

    Because of your name I believe you should ask George (4) to send  “his Kaizen Slides” for your evaluation…

    0
    #170976

    Fontanilla
    Participant

    Do you suggest a third party assessor (similar to the ISO)?

    0
    #170116

    Fontanilla
    Participant

    Very excellent points.  I want to make another, which also builds on a previous poster’s message:  Get an Executive Champion.  You need someone with firepower backing you at every step in the process (even when you are wrong, and you will be from time to time).  You don’t want to become the “bad cop” in the company when it’s time to make corrective actions.  Leave that to your champion.  You need to be feeding (good) advice to this person, who is hopefully already trained, doing the training with you, or going in the next class after you.  Your champion needs to be someone with enough “stripes” to effectively influence the rest.  The top of the company would be a good place to start looking for your person.
    Now, get to work on the previous poster’s message on strategy, with your champion and strategy team, and get solving problems!
    BTW, the points about starting small, using the 7 basic tools, etc., are also excellent advice.  In fact, you’ll find over the course of your career, these tools will solve the bulk of your problems.  The Pareto Principle applies here for sure.  More advanced tools will be used on relatively few projects.
    Best of luck to you!!

    0
    #169190

    Fontanilla
    Participant

    What’s the debate?  Why wouldn’t you be able to evaluate this?  Graph it first.  Use a histogram.  What does it look like?  Next, put the statistics to it. 
    It really doesn’t matter how many different factors are involved.  Any process has innumerable factors.  What matters is the final output.  In your case, the press-out load.  So chart it!  Just do it!
    BTW, I worked on a press-in project a couple of years ago.  It seemed as though it was a simple geometry problem.  Couldn’t have been more wrong!  Turned out, it was the rust preventative the supplier applied to the product lubricating the hole, making press-in loads appear too low. 

    0
    #169189

    Fontanilla
    Participant

    Using Minitab, it’s easy to get pairwise comparisons.  It’s just a check-box.
    Now, if you don’t have Minitab or other software, you can still compare the means both analytically and graphically.  As always, I recommend you start with the graphical comparison.  It should be quite obvious which mean is different when graphed.  If it’s not, and you need to know which is different, there are (2) points I’d like to make here: 
    First, if the difference is so small that you cannot detect it graphically, go back and re-evaluate what you’ve really learned (think practically).  Most of the time, we are looking for big shifts or differences in processes as a result of a bad factor, or as a result of a corrective action applied.  If your result is so small, you may not have the right factors involved or they may not be at the right levels.
    Second, get yourself Gonick’s Cartoon Guide to Statistics.  It’s really simple but absolutely the most effective book I’ve seen to nail these concepts down.  It provides you the basis for the calculations and the formulae to do the analysis.
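    If you have Python handy instead of Minitab, a small sketch of the same pairwise comparison using statsmodels (the data is invented):

    import numpy as np
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # hypothetical measurements from three machines
    values = np.array([10.1, 10.3, 10.2, 10.8, 10.9, 11.0, 10.2, 10.1, 10.3])
    groups = np.array(["A"] * 3 + ["B"] * 3 + ["C"] * 3)

    result = pairwise_tukeyhsd(values, groups, alpha=0.05)
    print(result)  # the table flags which pairs of means differ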

    0
    #168951

    Fontanilla
    Participant

    Koray, re-read Stan’s message.  You’re wasting your time.

    0
    #168950

    Fontanilla
    Participant

    Jason,
    Go to Gemba!  Sounds like you need to physically observe the assembly process.  If you are batch building, you can observe individual element times.  From these individual element times, you can determine (roughly) the time for each unit.  If it is sequential build, you need to record individual element times and total cycle time per unit.  Either way, you get the data you need to look at the deviation of the process.  You cannot simply take the lot times and determine a deviation that makes sense.  Mathematically you can do it, but it has no basis in reality.  For that, you must go to gemba.
    BTW, take copious notes and understand the factors driving cycle time differences.  You will need to observe quite a number of assembly events, maybe without visibly recording anything in order to gain the trust of the operators.  If you just show up and start taking time study data, you’ll likely get results not representative of actual production.  Even after working with one mold-change team for about a month, before time studying them, I had one tech say, “I don’t know why we’re working so hard.  We know it’s only you.”  The point is your presence will influence the results so take that into consideration.

    0
    #168813

    Fontanilla
    Participant

    You can use Logistic Regression in this case to develop your relationships.  Because you are dealing with humans, the behavior most likely cannot be considered “normal” (in the statistical sense!), so your non-parametric analysis tools will be most useful for the analytical side of the project.
    Often, the best tool or, “right” tool, for this type of analysis, will be your graphical tool(s).  Chart it.  Chart it in 2 and 3 axes.  Use your eyes to show you the relationships or lack thereof. 
    And, always run a confirmation test!

    0
    #168422

    Fontanilla
    Participant

    You would do well to heed the advice of one of our recent presidents, “Trust but verify.”

    0
    #168421

    Fontanilla
    Participant

    Maybe you should give us a little insight into the business and the size of these contracts.  If one contract sustains you for 12-18 months, then maybe you’re just treading water or even growing, not looking at foreclosure as some responders assume.  You can keep the name confidential if you’d like, but a better sense as to what your products and structure are will make responses to your post more meaningful.

    0
    #168139

    Fontanilla
    Participant

    So, while you’re waiting your 5 to 8 months (you can get reduced power of the test with a shorter time period too), make sure you are graphing the repair rates of the units.  If in the first month the 40 control units each have 1 repair and the 40 modified units have, say, only 10 repairs total, you’d be feeling pretty good about the modification.  If in the second month the trend is still the same, 40 vs 10, you’d be feeling even better.  Suppose the third month brings 40 vs 25; you’d lose some confidence but overall, your disposition would still be sunny.  Now, come the 4th month, if you have 40 vs 30, you’d probably be about ready to claim the modification is a success, but the upward trend might give you pause.
    My point is to keep an eye on the overall output and judge it practically and graphically, as well as analytically.  You might find that within a few months, you can declare victory and move onto the next battle.
    Good luck!

    0
    #168013

    Fontanilla
    Participant

    Most likely, you will need to use a non-parametric analysis rather than the t- or z-tests.  Time-based outcomes typically do not follow a normal distribution.
    Definitely try the other authors’ recommendations and put your data into dot plots, histograms, etc. and ask yourself, “Self, can I visually detect a pattern in my data?”  This will likely be all the analysis you need to make a practical decision.
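    And if the dot plots leave any doubt, a minimal non-parametric sketch (hypothetical before/after cycle times in minutes; assumes scipy is available):

    from scipy.stats import mannwhitneyu

    before = [12.1, 15.3, 9.8, 22.4, 11.0, 14.7, 30.2, 10.5]
    after  = [8.9, 10.2, 7.5, 12.8, 9.1, 11.4, 10.0, 8.3]

    u, p = mannwhitneyu(before, after, alternative="two-sided")
    print(f"Mann-Whitney U={u}, p={p:.4f}")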
    Good luck!
     

    0
    #167175

    Fontanilla
    Participant

    I’ll second Mike’s comments regarding part-time BBs.  That combination is a recipe for frustration, failure, and degradation of a Six Sigma program.  A Black Belt needs to work broadly, across functional areas, and in such a capacity, to be able to effect change in each area.  Change doesn’t come about quickly.  It takes perseverance, persuasion, data (and more and more data!) as well as business acumen.  There is an old rule of thumb that if you want someone to remember something, you need to tell them at least 8 times.  A part-timer may be able to get the message across, but the repetition and consensus-building that it requires will likely take quite a long time.  Probably long enough that the first few messages will be forgotten.
    A Green Belt can be part time on a project.  A part-time Black Belt may be possible if you are a very effective time manager and communicator.  Some Black Belts work on 2 or 3 projects simultaneously (and therefore might be considered “part time” on each).  If this is your part-time arrangement, you’ll have a better chance at success.  If not, may the Force be with you!

    0
    #166601

    Fontanilla
    Participant

    The MSA v3 manual on Attribute Gage R&R is about the worst I’ve seen.  They leave out a lot of links in the calculations — such as conversion of the “count data” to the probabilities, estimation of the probability of chance agreement, etc.  And they left out how they arrived at the Miss Rate and False Alarm Rate.
    A better article is “When Quality is a Matter of Taste, Use Reliability Indexes” by David Futrell (Quality Progress, May 1995), p. 81.  This is a good overview of the method and provides examples.  Also, crosstabulation methods are available in just about every stat textbook which includes nonparametric methods.  But these usually don’t cover the “Kappa” calculation.
     
    I don’t have any insight or reference covering the “Miss Rate” and “False Alarm Rates”.
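    For the Kappa piece itself, here is a bare-bones sketch of the calculation from two appraisers’ pass/fail calls (the data is invented; this is the standard observed-vs-chance-agreement formula, not the MSA manual’s worked example):

    def cohen_kappa(r1, r2):
        # observed agreement vs. agreement expected by chance (binary calls)
        n = len(r1)
        p_o = sum(a == b for a, b in zip(r1, r2)) / n
        p1, p2 = sum(r1) / n, sum(r2) / n
        p_e = p1 * p2 + (1 - p1) * (1 - p2)
        return (p_o - p_e) / (1 - p_e)

    appraiser_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0]
    appraiser_2 = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0]
    print(f"kappa = {cohen_kappa(appraiser_1, appraiser_2):.2f}")  # ~0.66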

    0
    #165983

    Fontanilla
    Participant

    I’m a newly repatriated Master Black Belt.  It was a huge struggle to get back into Engineering.  The Director I worked under demanded I essentially replace myself!  Without a succession plan, I was going nowhere.  I worked at training others worldwide to assume the things I did and to make the regions self-sufficient.  Finally, in July, I said move me or I leave (well, not in so many words, but the implication was there).  And, I was moved!  I didn’t go through HR.  I went to other Directors and asked if they had a position for me.  In effect, I went job-hunting within my own company.  I have a very good reputation and was quickly taken by the group I wanted to get into.  It does, however, highlight an important lack of planning by our continuous improvement group.  The original plan was 3-5 years as a Black Belt, but when the end of the time period came, they were unwilling to let me go.  I had to force the issue.

    0
    #165629

    Fontanilla
    Participant

    All of the posts regarding the transition point being .05 entirely miss the point.  You base your significance threshold (alpha) on the amount of risk you are willing to take.  How much is it worth to you if you are wrong?  If it is low risk, you might select a high alpha such as .1 or .15.  If you are risk-averse, you might use an alpha of .01.  Your risk of making Type I (alpha) and Type II (beta) errors is what you want to mitigate in your choice of threshold.  The commonly used .05 is a good suggestion, but it doesn’t address your specific level of risk acceptance.
    So, what’s it worth to you if you are wrong?  Base your alpha decision on that.
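    To make the trade-off concrete, a small sketch (assumes statsmodels; the effect size and sample size are made up) showing beta rise as you tighten alpha:

    from statsmodels.stats.power import TTestIndPower

    for alpha in (0.15, 0.05, 0.01):
        # power of a two-sample t-test, medium effect size, n=30 per group
        power = TTestIndPower().power(effect_size=0.5, nobs1=30, alpha=alpha)
        print(f"alpha={alpha:.2f} -> beta={1 - power:.2f}")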

    0
    #165048

    Fontanilla
    Participant

    What you need to do is find reference samples that are known to fail.  You have a couple of options here.  If you’re interested in only pass/fail, then run the test with samples that will and will not fail (approximately a 50/50 mix of each).  Don’t use “basket case” failures, as that will bias your results.  You want “borderline” failures; ones you suspect the test may or may not be able to detect.  More samples are better.
    Your other option is if you are running this as a variable MSA.  Your Y is either voltage to failure or time to failure.  Again, taking samples that are known to fail, you compare your tester results with the standards and run your analysis.
    What are samples “known to fail?”  That is where you have some latitude.  They do not necessarily need to be representative of your actual products under test.  Work with your engineers/scientists to come up with a set of standards you can use for this investigation.
    Good luck!

    0
    #164252

    Fontanilla
    Participant

    Just to help (maybe!) clarify why you won’t do capability analysis with this study: you are probably making a huge assumption that the “distances” between levels of your rating scale are the same.  Most likely, this is not true.  Most likely they are non-linear.  That is, the difference between, say, a “1” and a “3” in the responses is mathematically a “2,” which Minitab or other software will dutifully calculate for you.  However, did it really take 2 “units” of human, emotional response to go from a 1 to a 3?  Now ask that of all the respondents.  You get a highly non-linear set of responses.  Using a linear method to analyze it just doesn’t make practical sense.
    What makes practical sense is the Pareto approach mentioned earlier.  Put on your “Graphical” cap (from your Practical, Graphical, Analytical wardrobe), and then your “Practical” hat.  Forget about the “Analytical” at this point.  You may come back to it later, like Darth did, after much work.
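    A trivial sketch of that graphical/practical route (invented 1-5 responses, charted as counts rather than averaged):

    from collections import Counter

    responses = [5, 4, 4, 1, 5, 3, 4, 5, 2, 4, 5, 5, 1, 4, 3]
    counts = Counter(responses)
    for score in sorted(counts, key=counts.get, reverse=True):  # Pareto order
        print(f"score {score}: {'#' * counts[score]} ({counts[score]})")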
    Regards

    0
    #162149

    Fontanilla
    Participant

    Juvy,
    Implementing a sampling plan on an unstable process isn’t a good idea.  You will need to achieve stability first.  Use of control charts is highly recommended here.
    Simply changing from 100% inspection to random sampling will most likely improve your acceptance rate (currently 50%, which is terrible) at the expense of sending more defects to your customer.
    So, you have to choose.  Which is more expensive?  Rejecting 50% of your lots of production, or angering a customer by sending defects?
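    As a starting point for the control charting, a minimal p-chart limit sketch (using the 50% figure from the post; the subgroup size of 50 is assumed):

    import math

    def p_chart_limits(p_bar, n):
        # 3-sigma limits for fraction defective, subgroup size n
        se = math.sqrt(p_bar * (1 - p_bar) / n)
        return max(0.0, p_bar - 3 * se), min(1.0, p_bar + 3 * se)

    print(p_chart_limits(p_bar=0.50, n=50))  # ~ (0.29, 0.71)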
    Good luck!

    0
    #159594

    Fontanilla
    Participant

    You should consider your Components of Variance as well.  Draw out the “tree” structure and then, look at what your subgrouping strategies will be.  The COV will tell you where your process is stable/unstable and will tell you which family of variation should be attacked first.

    0
    #158906

    Fontanilla
    Participant

    Sounds like you have some Shingijutsu background….

    0
    #156745

    Fontanilla
    Participant

    What is the difference between a “defect” and a “defective”?
    cheers

    0
    #153970

    Fontanilla
    Participant

    Hi John,  I would appreciate it if you could send me some sample reports too.  [email protected]

    0
    #153515

    Fontanilla
    Participant

    Sick sigma has fooled many more people than just you. 

    0
    #151677

    Fontanilla
    Participant

    Homogeneity and rational subgrouping seem to be requirements of control charts.
    http://www.sei.cmu.edu/str/descriptions/spc.html
    “Next, the notions of homogeneity and rational subgrouping need to be understood and addressed. Homogeneity and rational subgrouping go hand in hand. Because of the non-repetitive nature of software products and processes, some believe it is difficult to achieve homogeneity with software data. The idea is to understand the theoretical issues and at the same time, work within some practical guidelines. We need to understand what conditions are necessary to consider the data homogeneous. When more than two data values are placed in a subgroup, we are making a judgement that these values are measurements taken under essentially the same conditions, and that any difference between them is due to natural or common variation. The primary purpose of homogeneity is to limit the amount of variability within the subgroup data. One way to satisfy the homogeneity principle is to measure the subgroup variables within a short time period. Since we are not talking about producing widgets but software products, the issue of homogeneity of subgroup data is a judgement call that must be made by one with extensive knowledge of the process being measured.
    The principle of homogeneously subgrouped data is important when we consider the idea of rational subgrouping. That is, when we want to estimate process variability, we try to group the data so that assignable causes are more likely to occur between subgroups than within them. Control limits become wider and control charts less sensitive to assignable causes when containing non-homogeneous data. Creating rational subgroups that minimize variation within subgroups always takes precedence over issues of subgroup size.”
    Dan
     

    0
    #151578

    Fontanilla
    Participant

    I agree.  Lean before Six Sigma (the argument for stability is a valid one though.  Without stability, “Lean” can be a frustrating experience.) 
    The  most successful blend of the two I’ve seen in my company (a very large automotive parts manufacturer) has been where we did a VSM on the entire process, including the offices as well as the plant, and then assigned teams to work on the “starbursts”.  Some of those teams did Kaizen workshops, some of them became Green Belt projects.  It’s worked extremely well.
    The key is to understand what the “big picture” of the business is BEFORE you go headlong into improvement activities.  Any improvement initiative MUST be tied to the Strategic Business Objectives of the company.  Without that, you will have difficulty finding consistent direction and support.  With it, you will find yourself on the road to prosperity :-)

    0
    #150187

    Fontanilla
    Participant

    “(although it might be wise to start with some simple lean tools to settle it down first.)”
    What crap!

    0
    #149655

    Fontanilla
    Participant

    First of all, I disagree with the comment that the reason it is losing credibility is due to an overuse of statistics. Statistics is a tool in the toolbox and should be used when appropriate. The key to the success of six sigma has been its ability to discern a root cause when it wasn’t evident. I would agree that if your arm has been cut off you need to stop the bleeding, but before they give me replacement blood I sure hope they do the blood tests. I might also want to avoid any lost limbs in the future. If we want to react to outliers and spend resources on noise, then simply react to outcomes. If you want to drive changes at the root cause level, react to validated root causes.
    Sorry for the digression. Six Sigma is losing its luster because the talent pool is being diluted. Just because you can spell DMAIC and have a successful project or two does not mean you are an MBB.

    0
    #149574

    Fontanilla
    Participant

    Elimination of defects is nonsense, because defects relate to the specification. As the founder of six sigma, Bill Smith, said:

    “Another way to improve yield is to increase the design specification width. This influences the quality of the product as much as the control of process variation does.” (page 46)
    You can set whatever defect level you like by using the great Mr Smith’s method!

    0
    #148643

    Fontanilla
    Participant

    If Toyota is successful AND Toyota uses TPS, then if Ford uses TPS, it will be successful.
    True or false?

    0
    #148569

    Fontanilla
    Participant

    Explain what you’re trying to accomplish.
    My experience doing SS in a service / transactional organization has been interesting.
    Here are a couple of opportunities to pursue before SS:
    1. Most service organizations have a limited number of people with formal process / operations experience.
    2. Given the small number of people with process / ops experience, existing metrics are typically awful.
    3. So you have people with limited experience and awful metrics running an op.
    4. Not having metrics, and having to create a data collection process for projects to get metrics, is time consuming and costly. Consultants love this (cha-ching!).
    I like to first start by providing formal business process management training to the process owners.
    During the training they learn and gather 6 fundamental process metrics. They also learn how the metrics affect one another.
    They also create process maps and learn simple process analysis techniques.
    We do a bunch of other stuff as well but I’m keeping this short.
    The result is that you end up with people who understand their processes and key metrics to manage their processes. Now that metrics are in place and now that managers understand their processes better, six sigma projects can be cranked out quickly and a lot of the change management stuff you don’t have to worry about because the managers understand what’s going on.
    This is real brief, there’s more to it, but you get the point. Making changes to a process that no one knows how to manage is moot and unsustainable.
    The majority of lean and six sigma materials never talk about teaching managers how to use key process metrics to govern and understand their business.

    0
    #147285

    Fontanilla
    Participant

    Grand Canyon University offers an MBA with an emphasis in Six Sigma.
    If you have a more technical interest, try the MS in Engineering at ASU’s Ira Fulton School of Engineering, or the University of Texas at Austin.
    There are several more; Google it or ask around.

    0
    #145466

    Fontanilla
    Participant

    Stats God, you obviously have made a typo here.  It should be “Stats Dog,” because you certainly don’t know what you are talking about.  Maybe you have some small kernel of knowledge about this subject but obviously have difficulty expressing yourself.  Have your mother submit your next posting for you, o.k.?
    Mis-application of the tools used in hypothesis testing and ANOVA certainly could lead a small-minded individual like yourself to come to the conclusions you’ve posted.  However, when taken in the context of the problem and how they are applied, the hypothesis testing and ANOVA techniques are quite powerful.  Maybe you should do some research before you post.
    But let’s stick to the original question, shall we?  The question is whether the control chart requires time factors or whether the subgroups can be based on categorical factors.  Yes, you can subgroup on categorical factors.  You would do this to determine if you have a significant difference between clients, business types, etc.  You might also try using a Multi-Vari chart and include the time factors to show performance.
    And, pay no attention to those individuals who sometimes get onto this site and post nonsense like Stats Dog does.  His point might be valid in a particular context but since he offers no further explanation, you can take it under advisement but don’t let it stop you.
    Good luck.

    0
    #145464

    Fontanilla
    Participant

    It’s not quite clear from your description of the problem what you will be doing here.  Are you implying that some sort of reject level is acceptable?  I.e. a “fail” of an individual cavity within the 20-cavity part doesn’t necessarily “fail” the entire part?  Or, are you saying that you have a 20-cavity tool that produces 20 individual parts per cycle? 
    There are numerous examples of acceptance sampling.  When selecting a plan, you determine your AQL or, “Acceptable Quality Level.”  There are tables you can use to determine your sample size given an AQL and lot size.  One common, but dated, table is the ABC-STD-105. In general, the rule of thumb is that the absolute size of a random sample is more important than its relative size compared to the size of the lot.  For example, if you take 5 samples out of a lot of 50 (10%) which has 4% defects, you will accept the lot of 50 about 81% of the time.   If you take 100 samples from a lot of 1000 (still 10%) which has 4% defects, you will accept the lot less than 2% of the time.
    You can look up the values in a table in Quality Control textbooks or, you can use Minitab to create the probability of acceptance for you.  Model your situation based on the hypergeometric distribution if your trials are not independent and mutually exclusive or the binomial distribution if they are.
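    Those two acceptance probabilities are easy to reproduce; a quick sketch with scipy’s hypergeometric distribution (an accept-on-zero plan is assumed):

    from scipy.stats import hypergeom

    # P(accept) = P(0 defectives in the sample)
    # args: k=0, M=lot size, n=defectives in the lot, N=sample size
    print(hypergeom.pmf(0, 50, 2, 5))       # lot of 50, 4% defective  -> ~0.81
    print(hypergeom.pmf(0, 1000, 40, 100))  # lot of 1000, 4% defective -> ~0.015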
    Good luck! 

    0
    #145463

    Fontanilla
    Participant

    I’ve seen several companies that have the Black Belt role as a stepping stone to advancement into the senior management ranks.  These companies all have full-time Black Belt positions within a Continuous Improvement functional group.  Some are automotive, one is medical, one is diverse industrial.  At one automotive parts supplier, ALL director-level positions require Master Black Belt experience.  At another, Black Belts are equivalent in rank to a Quality Manager.  At ours, Black Belt certainly helps in the career progression but is not a required element (but I’m working on that!)
    But, regardless of the company, how the individual performs in the role should determine the next steps.  For example, an excellent performer might naturally step up into a role with greater responsibility after having proven himself or herself as an effective team leader, change agent, and problem solver.  A weak performer might be laterally transferred or even given a reduction.  In severe cases, they may be asked to leave the company altogether.
    Bottom line is that you’ve asked a difficult question and one that most likely requires a coordinated discussion between Human Resources and your Strategy Board to determine the answer.  The basic question, “How do I want to leverage my Six Sigma expertise?” is one that shouldn’t be taken lightly.  Maybe, it’s a Black Belt project for your Strategy Board!
    Good luck!

    0
    #144894

    Fontanilla
    Participant

    I concur with DeanB’s approach.  If you have a “one and done” project mentality, then your program is in trouble.  If people are either excited to begin another project or are required to work on more, or, in the best case, both, then your program has a fighting chance.  If people are only in it for the certification and wash their hands of DMAIC after they get their Green Belt certificate, then you need to work on your senior leadership and getting them to start championing the program.
    Good luck!

    0
    #144821

    Fontanilla
    Participant

    You ignorant jerk!
    Have some respect for one of the greatest men in the history of industry.
    Who is your hero … that fool Harry?

    0
    #58881

    Fontanilla
    Participant

    Thanks guys, these inputs gave a lot of insight.
    We have identified 2 projects in the technology area. One is to improve link stability of IPLCs and the other is incident reporting.
    These came out of a brainstorming session with the technology owners. I put forward the suggestions that came from you guys, and they certainly sparked some ideas.
    Will post further queries as and when I face them.
    Thanks
    Dan

    0
    #143352

    Fontanilla
    Participant

    What utter nonsense. What Hans really means is that he has been selling six sigma for many years.
    Yes, six sigma is great for sales … as long as you are selling six sigma, not buying it!
    I can just imagine how your sales team is going to react … you will be their laughing stock!
     

    0
    #142267

    Fontanilla
    Participant

    It will cost you…are you ready to start spending?

    0
    #141537

    Fontanilla
    Participant

    Thank you.  That is what I thought.  A black belt told me I can still use a Cpk even though my data is not normal.

    0
    #139928

    Fontanilla
    Participant

    Grasshopper, can you first define and articulate the difference between a defect and a defective?
    Is there a correlation between the two?

    0
    #139878

    Fontanilla
    Participant

    There are tons of training materials out there, mostly recycled crap from the last 10 years.
    Some materials emphasize stats.
    Some emphasize processes and transactions
    Some emphasize add on consulting services
    What types of problems are you trying to solve, and what kind of leadership commitment do you have?  These will be factors in determining which materials to purchase.
    None of the materials guarantee results.

    0
    #139293

    Fontanilla
    Participant

    Thanks for the reply, Brit.  Makes perfect sense.

    0
    #58802

    Fontanilla
    Participant

    Gregg,
    Good posting; you’re on your way; let the data talk.
    Dan

    0
    #58801

    Fontanilla
    Participant

    Gregg, nice posting; you’re on your way; good luck and let the data talk.
     

    0
    #137911

    Fontanilla
    Participant

    You say you’re not in a manufacturing process.  However, you really are.  You are transforming an input into an output through your team of clerks.  Think of it in terms of manufacturing.  Your time/motion studies for Manufacturing are exactly the same as in this process.  The work content is different but the process is the same.  How much time does it really take to process the data?  What is your First Time Quality?  What is the flow?  Are there non-value added steps?  What is the Total Cycle Time?  Work Cycle Time?  Value-Added Cycle Time?  You can do Value-Added Flow Analysis, Value Stream Mapping, Time/Motion studies and probably dozens of others.  Always keep in mind the Lean Principles and the House of Lean.  Answering the emails and phone calls is harder to quantify in terms of value-added but should be considered.
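    As one concrete (entirely hypothetical) illustration of that cycle-time accounting:

    # made-up timings (minutes) for one transaction moving through the clerks
    steps = {"receive": 2, "wait in queue": 45, "key data": 6,
             "wait for approval": 120, "approve": 3, "send": 1}
    value_added = {"receive", "key data", "approve", "send"}  # a judgment call

    total = sum(steps.values())
    va = sum(t for s, t in steps.items() if s in value_added)
    print(f"Total cycle time {total} min, value-added {va} min "
          f"({100 * va / total:.0f}%)")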
    Good luck!

    0
    #127072

    Fontanilla
    Participant

    Thank you C.R.Shetye!
    It seems like you have a great deal of experience and pay attention to details. You answered my questions precisely to the point!
    Thank you, Dan

    0
    #127049

    Fontanilla
    Participant

    “Your statement that they are using the same machine, makes me believe they are on different shifts.  This can be a HUGE factor to consider.”
    Yes, the night shift is slacking off because nobody is around and they get 15% extra in pay… not fair to the owner.
    Dan
     

    0
    #127012

    Fontanilla
    Participant

    “So you would have to first define what is the minimum expectation.  Then work with HR to link the expected output with compensation/reward/recognition.”
    I think this is the perfect solution to my problem :)
    Thank you
     

    0
    #127003

    Fontanilla
    Participant

    tottow,
    “What happens if you cannot motivate operator X enough, move him/her out the door, hire operator X3 and he/she starts producing 42,000′ per day. Is it then operator X2’s turn to get motivated? ”
    YOU GOT IT! Look for the best possible entitlement until you reach that across the shifts. If X3 could do that, damn right I will be all over X2 for not matching or not being statistically close to the best performance of X3. Assume quality checks are done on a regular basis.
    Scott,
    I understand that you are trying to find OTHER sources of variables. However, what if I tell you that the operator’s skill/experience and motivation are the only sources of variation in footage produced…what would YOU recommend in order for the weakest link to be in line with everybody else on this machine?
    Can anybody else share their experience with posting metrics on the main board?

    0
    #126982

    Fontanilla
    Participant

    6,000′ x 20 working days = 120,000′ less per month.
    That’s a big difference if you pay them the same wage and they work on the same machine/same types of jobs…

    0
    #126976

    Fontanilla
    Participant

    The process owner is involved. We have 6 different machines. The question here is why 3 operators on the same machine produce statistically different amounts of footage on average per day within a one-month time frame.
    For example: In August, operator X produced an average of 30,000′ per day, but operator X2 produced 36,000′.  SAME machine, different shift.  What do you do with operator X?
    Dan

    0
    #117597

    Fontanilla
    Participant

    Can you please send a copy to [email protected]?  Thanks.

    0
Viewing 100 posts - 1 through 100 (of 167 total)