iSixSigma

Dillon

Forum Replies Created

Viewing 100 posts - 1 through 100 (of 129 total)
  • Author
    Posts
  • #183240

    Dillon
    Participant

This is an interesting topic that I like to discuss with people.  I think Six Sigma, the software development life cycle (SDLC), and ITIL are very similar in some ways but very different in others.  Each exists to support a specific kind of work: ITIL is focused on support, SDLC on software development, and Six Sigma on process improvement.  They can overlap and be used at the same time, but they solve different problems.  I also get a lot of questions about how Six Sigma and project management are related.  I tell people that PM is used to run a Six Sigma project and you must do both.  I think the same is true for software development and implementing ITIL: use PM to oversee these projects, and don’t leave it out when using these tools.

    0
    #183234

    Dillon
    Participant

    I have had this happen before.  The best way is to identify the process owners up front and make sure they understand they must own the process when it is done.  I work in the IT area and it is very common that once a software solution is put in place no one wants to own it or be accountable for the results.
    It is important to plan the exit strategy before the project begins.  I would determine who has the potential to lose or gain the most and convince them it is in their best interest.

    0
    #170810

    Dillon
    Participant

    To Louise,
     Did the software at Dbar-innovations posted by Vince help you? Did you try the live test drive and use the analysis tools? I am curious,
    Thanks,
     Doug

    0
    #165492

    Dillon
    Participant

    That’s awesome!

    0
    #162822

    Dillon
    Participant

    I’m not sure I agree with your argument.  However, first let me restate the question: can interval data be discrete?  I agree the number of children is discrete, following a Poisson distribution and not a binomial (child vs. no child) as you suggest.  Also, one can have an average of count data.  The Census Bureau reports the average number of children per household as 2.2.  I agree, one will NEVER find a household with 2.2 children, but that’s not to say the average cannot be computed because, as I’m suggesting, the data is discrete but also interval.  An average is also computed with the Poisson distribution to determine lambda.  No one would dispute that the number of deaths due to a horse kick in the Prussian army could ever be anything other than an integer when Poisson developed this distribution; however, to determine the probability of an event, one needs to compute the average to determine lambda.  So my question still stands: can interval data be discrete?
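To make the lambda point concrete, here is a minimal Python sketch (the household counts are invented for illustration): the mean of discrete count data is rarely an integer, yet it is exactly the rate parameter a Poisson model needs.

```python
import math

# Hypothetical counts of children per household (discrete data)
children = [0, 1, 2, 2, 3, 1, 4, 2, 0, 3, 2, 2, 3, 1, 2]

# The average of count data is rarely an integer -- here it plays
# the role of lambda in a Poisson model.
lam = sum(children) / len(children)

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson distribution with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

print(f"lambda = {lam:.2f}")
print(f"P(exactly 2 children) = {poisson_pmf(2, lam):.3f}")
```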

    0
    #162819

    Dillon
    Participant

    If the CIs are close, i.e., they just barely overlap or just barely miss overlapping, I would run a statistical analysis.  On multiple occasions I’ve seen confidence intervals that were “close” and had the hypothesis test indicate otherwise: they slightly overlapped but were statistically different, and vice versa.  It doesn’t cost anything but a few keystrokes, so it’s better to have the p-value and know for sure than to make a decision based on a graph, because graphs can be misleading.
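The “close CIs” situation is easy to demonstrate numerically. This is a minimal sketch with made-up means and standard errors, chosen so the two 95% intervals overlap slightly while a two-sided z-test on the difference still rejects at the 0.05 level:

```python
import math

# Two group means with standard errors (hypothetical values, chosen
# so the 95% CIs overlap slightly but the difference is significant)
m1, se1 = 10.0, 1.0
m2, se2 = 13.0, 1.0
z_crit = 1.96

ci1 = (m1 - z_crit * se1, m1 + z_crit * se1)
ci2 = (m2 - z_crit * se2, m2 + z_crit * se2)
overlap = ci1[1] > ci2[0]          # upper end of CI1 past lower end of CI2

# Two-sided z-test on the difference of means
z = (m2 - m1) / math.sqrt(se1**2 + se2**2)
p_value = math.erfc(z / math.sqrt(2))   # two-sided p from the normal tail

print(f"CIs overlap: {overlap}, p = {p_value:.3f}")
```

The reason this can happen: overlap compares the difference against 1.96·(se1 + se2), while the test compares it against 1.96·√(se1² + se2²), which is smaller.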

    0
    #61692

    Dillon
    Participant

    Thanks so much
    dungphamv@yahoo.com

    0
    #61690

    Dillon
    Participant

    Thank you so much.
         Do you happen to know some site that I can refer to…..

    0
    #160586

    Dillon
    Participant

    I’m currently working on a project where we are dealing with a specific defect within a product.  There are multiple defects possible in this product but we were assigned to work on a specific defect in the product.  There is another current project looking at another of the defects in the product.
    It would just be too hard to work a project to look at all of the defects at once.  We will track to see if corrective actions to one defect affect the other elements of the product.

    0
    #156791

    Dillon
    Participant

    BTDT,
    I don’t disagree with what you’re saying but I took the request to mean they just wanted to more easily identify violations of the other seven tests Minitab conducts.
     
    Regards,
    Doug

    0
    #61056

    Dillon
    Participant

    Brit,
    Thanks!  That makes sense!  I appreciate you taking the time to straighten me out!
    Doug

    0
    #61051

    Dillon
    Participant

    Good morning Brit!
    One quick question for you…in your example, you use +/-5% for sample precision – would this be the same thing as reliability?  What I mean is, if I want to have a 95% confidence level and 90% reliability that there would be 0 defects…then Z = 1.96, w = 0.10?  Is this the same thing?
    And by the way, thanks again for your help.
    Doug
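For the zero-defect case asked about above, a common formulation is the success-run theorem rather than the ±precision formula. This is a minimal sketch, assuming the standard form R**n <= 1 - C (the function name is ours):

```python
import math

def zero_failure_sample_size(confidence, reliability):
    """Success-run theorem: smallest n such that observing n good
    parts with zero defects demonstrates `reliability` at the given
    `confidence` level.  Solves reliability**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# 95% confidence that reliability is at least 90%
print(zero_failure_sample_size(0.95, 0.90))  # 29 parts, zero defects allowed
```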

    0
    #61050

    Dillon
    Participant

    That is correct Brit!  Thanks for your entry…it helped a lot!
    A little background, coming from manufacturing, I dealt a lot with continuous (or variable depending on what you want to call it) data…not a lot with attribute…now working in healthcare…almost everything seems to be attribute!
    Thanks again to all who answered!
    Doug

    0
    #58706

    Dillon
    Participant

    1) You can’t average percentages.
     
    2) Think about your process using numbers.
      At station one you input 100 widgets, and at a throughput of 0.8 you would have 80 widgets that move on to station two.
      At station two you input 80 widgets, and at a throughput of 0.9 you would have 72 widgets that move on to station three.
      At station three you input 72 widgets, and at a throughput of 0.8 you would have 57 widgets that move on to station four.
      At station four you input 57 widgets, and at a throughput of 0.9 you would have 51 widgets that move on to station five.
      At station five you input 51 widgets, and at a throughput of 0.8 you would have 40 widgets that made it through all five stations.
     
    When I multiply 0.8*0.9*0.8*0.9*0.8 I get 0.415.  Since I truncated instead of rounding at each station, the math works.
     
    Make sense?
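A quick sketch of the same arithmetic in Python, using the yields from the example above:

```python
# Rolled throughput yield: multiply station yields rather than
# averaging them.  Yields taken from the five-station example above.
yields = [0.8, 0.9, 0.8, 0.9, 0.8]

rty = 1.0
for y in yields:
    rty *= y

# Walk actual units through, truncating at each station as in the example
units_in = 100
units_out = units_in
for y in yields:
    units_out = int(units_out * y)

print(f"rolled throughput yield = {rty:.3f}")   # ~0.415
print(f"{units_in} widgets in -> {units_out} out")
```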

    0
    #58704

    Dillon
    Participant

    Read Lean Six Sigma for Service by Michael George.  There are several case studies referenced.
     

    0
    #58445

    Dillon
    Participant

    Here is a story from the WSJ that you might find relevant:
     
    Coffee on the Double
    As It Adds to Menu, Starbucks Seeks to Speed Up Service; ‘This Is a Game of Seconds’
    By STEVEN GRAY, Staff Reporter of THE WALL STREET JOURNAL, April 12, 2005; Page B1
    Steaming cup in hand, Kristin Kipper raced out of a Starbucks in downtown Chicago one recent morning and scowled. “I missed my bus,” she complained. “I only ordered a hot chocolate, so I don’t know what’s up with that.”
    Ms. Kipper, 22 years old, had just stood for about two minutes in a line of about 10 people. She is one of millions of Americans who wait each morning to shell out, on average, between $3.50 and $4 for one of Starbucks Corp.’s premium beverages. But how long is too long to wait for a vanilla latte?
    The question is a key one for the world’s largest chain of coffee shops. In a survey last year by market-research firm Mintel International Group Ltd., 64% of Americans said they pick a restaurant based on how much time they have. And as Starbucks broadens its menu with hot breakfast sandwiches, which could take even longer to prepare than a time-consuming Double Chocolate Chip Frappuccino Blended Creme, the Seattle-based chain is loath to lose any of its 33 million world-wide weekly customers to Dunkin’ Donuts or other rivals who are also trying to beat the clock.
    “This is a game of seconds,” says Silvia Peterson, Starbucks’s director of store operations engineering, adding that she and her team of 10 engineers are constantly asking themselves: “How can we shave time off this?”

    A few years ago, engineers noticed that “baristas” — the Starbucks employees who prepare drinks — had to dig into ice bins twice to scoop up enough ice for a Venti-size cold beverage, Starbucks’s biggest. “The old Venti scoop didn’t give you enough ice,” Ms. Peterson says. Engineers experimented with ceramic coffee mugs, which then led them to develop one-piece plastic “volumetric ice scoops.” But the handles kept breaking, so engineers had stronger ones made. The new scoops helped cut 14 seconds off the average preparation time for blended beverages of about one minute.
    Efforts like these have helped Starbucks outlets increase their average yearly volume by nearly $200,000, to roughly $940,000, since 1999, executives say.
    Among other time-conscious players in the hotly competitive $476 billion U.S. restaurant business, sandwich chain Cosi Inc. of Deerfield, Ill., has installed large menu boards on the walls of its restaurants so customers won’t have to take time at the head of the line deciding what to order. Wendy’s International Inc. of Dublin, Ohio, is touting its new double-sided grill that cuts the cooking time of a four-ounce hamburger patty to 85 seconds from more than five minutes. Caribou Coffee Co., a 310-unit chain based in Minneapolis, has stopped requiring signatures for credit-card purchases under $10.
    At Starbucks, it takes about three minutes on average from the time a customer gets in line until the order is delivered. That is down about 30 seconds from when the company started measuring five years ago. (Waiting times in busy urban outlets at peak hours can be considerably longer than the average.) Times for drink preparation range widely, from less than 20 seconds for a Tall black coffee to about 90 seconds for the Venti Double Chocolate Chip Frappuccino Blended Crème.
    Starbucks was forced to focus harder on speed of delivery as its growth exploded in the 1990s. In 2000, the company recruited Ms. Peterson, an engineer by training who had spent years at Burger King and Denny’s Corp., to start an industrial-engineering team that would break down the beverage-preparation process in hopes of correcting inefficiencies.
    One step was to stop requiring signatures with credit-card purchases under $25, encompassing a larger group than Caribou Coffee’s $10 cutoff. Processing credit cards had become the longest part of the transaction at the cash register, taking on average 30 seconds, Ms. Peterson says. Eliminating signatures sliced the time to 22 seconds.
    In 2003, the company created a new role for restaurant employees called a “floater.” The floater is like a quarterback, directing behind-the-scenes traffic, running to the storage room for syrup or coffee beans, and serving as back-up cashier or barista. One recent morning in a Chicago Starbucks, the floater moved up and down the line of waiting customers, took orders, marked them on the sides of cups and passed the cups to the baristas, who could then start making drinks before customers reached the register.
    Floaters added costs, but Starbucks executives say they are worth it because restaurants can serve more customers more quickly. Since their creation, floaters have shaved 20 seconds from the overall service time, largely by getting the drink-preparation process started sooner. It also has a psychological benefit, says Jim Alling, president of Starbucks’s U.S. division. “The customer doesn’t feel like he’s waiting in line. It’s like a ballet, a well-choreographed play,” he says.
    Since 2000, Starbucks has been slowly installing special espresso machines in all of its 9,000 world-wide restaurants. With the push of a button, these machines grind coffee beans for espresso and then brew the beverage, allowing baristas to focus on steaming milk for latte drinks that are heavily in demand. The machines produce a more consistent amount and flavor of espresso, and have cut 24 seconds off the average time it takes to “pull an espresso shot,” bringing it to about 36 seconds. “If there’s ever a silver bullet in our world, that’s it,” Ms. Peterson says.
    Despite the need for speed, Starbucks executives, who prefer to have their locations called “stores,” are sensitive to being lumped in with fast-food outlets. “You don’t want customers to feel they’re in an Indy-500 pit stop,” Mr. Alling says. “It’s a real trade-off — to move quickly, and not be rushed.” The sharper focus on speed doesn’t seem to have hurt the chain’s order-accuracy rate, which has remained consistent at about 99.4%, the company says.
    This spring, Starbucks is continuing its gradual nationwide rollout of hot breakfast sandwiches, including a sausage, egg and cheese combination on an English muffin, and another with eggs Florentine on an English muffin. The sandwiches offer an opportunity for increased revenue but take about 90 seconds to heat.
    How to sell lots of hot sandwiches without slowing service “is probably the biggest question or issue with rolling out a warm item, in addition to product quality and consistency,” says Andrew Barish, restaurant-industry analyst for Banc of America Securities LLC in San Francisco.
    Executives insist the 90 seconds needed to heat sandwiches won’t increase the targeted three-minute average service time. But they are fuzzy on the details of how they will achieve this.
    Starbucks tested warming the sandwiches in various types of microwave ovens, which were fast but didn’t meet expectations for taste and quality. Convection ovens produced better quality but were too slow. Now Starbucks plans to use a hybrid convection-and-microwave oven that it says will satisfy both concerns.
    Starbucks also is considering assigning a floater to deal only with the sandwiches on busy mornings. Executives say they hope to learn more as the sandwiches are introduced in different markets.
     

    0
    #114609

    Dillon
    Participant

    Pablo,
    Thanks for the feedback. It’s greatly appreciated.
    Doug

    0
    #111579

    Dillon
    Participant

    John:
    Please see message 60090 which was in response to your suggestion. 
    Thanks

    0
    #111577

    Dillon
    Participant

    Mark, sorry the previous message was intended for John Noguera. 

    0
    #111576

    Dillon
    Participant

    Mark:
    If there is a more direct way to accomplish this task, I am certainly interested; however, I did try this and ran into a couple of challenges.  First, Minitab does not allow zeros in the data (which mine has), and second, my sample sizes are not equal.  I have between 150 and 200 data points for each.  How would one decide which to eliminate to make them equal?  They were both taken over the period of a week and represent Monday through Friday by design, to ensure unique conditions present on some days were included.  Thoughts?

    0
    #111535

    Dillon
    Participant

    John:
    Thank you for the clarifications and the information on the exponential testing in Minitab.  As you can tell I am still in the learning stage.  
    Doug

    0
    #111528

    Dillon
    Participant

    Stan/All:
    This is great feedback and I plan to follow your suggestion and compare the confidence intervals. This seems to be the most straightforward solution to the problem. Since this is non-normal data, I believe that I should use the one-sample sign test to obtain each distribution’s confidence interval, correct?
    By the way, I did go back and re-read the help screens within Minitab as suggested. Try the following yourself: open the program, click 1. Help (from the top toolbar), 2. StatGuide, 3. Help Topics, 4. Nonparametric, and then 5. any of the following tests (Mann-Whitney, Kruskal-Wallis, Mood’s Median, and Friedman). You will find the Summary screen for each of these tests assumes that the distributions have the “same shape” and “equal variances”.
    I contacted Minitab tech support and they are consulting with their Ph.D. statistician on the issue. The person I spoke with did, however, pass along some information from one of their references that I thought might be worth sharing.  The reference was the third edition of the Handbook of Parametric and Nonparametric Statistical Procedures by David J. Sheskin (pg. 757-758):
    “Various sources (e.g., Conover (1980, 1999), Daniel (1990), and Marascuilo and McSweeney (1977)) note that the Kruskal-Wallis one-way analysis of variance by ranks is based on the following assumptions: a) Each sample has been randomly selected from the population it represents; b) The k samples are independent of one another; c) The dependent variable (which is subsequently ranked) is a continuous random variable.  In truth, this assumption, which is common to many nonparametric tests, is often not adhered to, in that such tests are often employed with a dependent variable which represents a discrete random variable; and d) The underlying distributions from which the samples are derived are identical in shape.  The shapes of the underlying population distributions, however, do not have to be normal.  Maxwell and Delaney (1990) point out that the assumption of identically shaped distributions implies equal dispersion of data within each distribution.  Because of this, they note that, like the single-factor between-subjects analysis of variance, the Kruskal-Wallis one-way analysis of variance by ranks assumes homogeneity of variance with respect to the underlying population distributions.  Because the latter assumption is not generally acknowledged for the Kruskal-Wallis one-way analysis of variance by ranks, it is not uncommon for the sources to state that violation of the homogeneity of variance assumption justifies use of the Kruskal-Wallis one-way analysis of variance by ranks in lieu of the single-factor between-subjects analysis of variance.  It should be pointed out, however, that there is some empirical research which suggests that the sampling distribution for the Kruskal-Wallis test statistic is not as affected by violation of the homogeneity of variance assumption as is the F distribution (which is the sampling distribution for the single-factor between-subjects analysis of variance).”
    I looked up the Mann-Whitney U test and found the exact same paragraph as above (except it compared the Mann-Whitney to the t test for two independent samples). (pg. 423-424)
    The section covering the Friedman two-way analysis of variance by ranks test assumptions said nothing about variance. (pg. 845-846)
    By the way, technical support is also going to look at why their help screens vary on this subject. 
    Thanks to all for your help on this one.  

    0
    #110921

    Dillon
    Participant

    I would be interested in hearing more about how you have successfully applied 6 Sigma.  Please drop me an email note letting me know how I can reach you. 
    Thanks,
    Doug

    0
    #110919

    Dillon
    Participant

    Jonathan,
    I hope you’re still monitoring this site.  I would be interested in the case study you spoke of.  I work for the construction division of a large retailer who would like to expand the use of 6 Sigma.
    Thanks,
    Doug

    0
    #60317

    Dillon
    Participant

    Maria, thank you for your feedback and advice.  All good points which I will take to heart as I move forward.  It really helps having someone provide insight from experience. 
    Thank you.   Doug
     

    0
    #60309

    Dillon
    Participant

    In my opinion, the process owner is the person who will be left with the responsibility of making sure the new process is continued and tracked for effectiveness once the BB is no longer on the project.  In almost all cases, this should be the department director or manager.  The champion of any project should be the person who has the authority to make changes during the project and after it.  Those changes might include anything from policy changes to personnel changes.  We sometimes have trouble appointing the process owner and champion when the project might fall under several different departments.  At that point, we make sure that all involved understand that for the parts of the process that are affected, the appointed process owner and champion have the authority to make changes.
    Very few of our physicians actually hold positions within the hospital administration, and therefore aren’t often appointed as process owners or champions.

    0
    #108339

    Dillon
    Participant

    Thank you kbailey for your response. 

    0
    #108325

    Dillon
    Participant

    Hello all,
    I am also working on DSO reduction.  Does anyone have any helpful information on how to properly manage accounts receivable?

    0
    #107831

    Dillon
    Participant

    Manish,
    I applaud you for being so dedicated to your job that you already follow a rigorous process that makes your projects successful.  However, not everyone is like that, which is where Six Sigma can come in and help.  Six Sigma provides structure to a project; it provides a path for people to follow in order to drive toward a successful completion.  The integration of Lean principles into Six Sigma (or vice versa) makes it even more powerful and successful.  Most of the companies that I have worked for have complained about solving the same problem over and over again.  Guess what: it is because they did not lock in their improvements.  They spend their time going from one issue to the next, firefighting.
    I have read a lot of the posts and debate that this topic has launched – which I think is great.  Discussions like these will help produce a better, more robust system than what many have in place today.  I am also disappointed, but not surprised, to hear the abuse that the methodology is taking at various companies around the country and world.  Six Sigma and Lean, Lean Sigma if you will, should only be used when necessary.  If you already have the answer to a problem, don’t force the methodology on the solution.  Implement the solution, lock it in place, and move on.  Lean Sigma should be used when the solution is not known – then it can be one of the most effective tools in your tool box.  To steal from another comment that I read in this chain of posts, just because you have a hammer in the tool box does not mean that you have to use it.  Use it when you need to.  Otherwise, it is just waste.
     
    Sorry for rambling…
    Doug

    0
    #107827

    Dillon
    Participant

    Lee,
    Usually the sampling during a process audit isn’t meant to be “statistically valid.” Most process auditors will audit 3 to 5 people within each unique group, and assess from there whether the process is defined and in control. It’s quite possible that noncompliance will be overlooked, but it’s a trade-off.
    —Doug
     
     

    0
    #107824

    Dillon
    Participant

    Although we are in a very different business, my company has a few similarities to yours.
    We make plastic injection molds and we are a molding house as well.  We are a small company (60 people in all) and “engineering” is 6 people. (We also have manufacturing engineers, quality engineers, admin support, and other groups that help those six very technical people.) We are ISO9001 certified. (We were ISO9002 certified for six years.)
    I am a trained Black Belt and a quality manager (also a quality engineer). I brought Six Sigma to this company, but I never called it that because I didn’t want to get into a discussion of how humans could have more than 3.4 defects per million in what they do. We do NOT have a “Six Sigma program.”
    We have benefited from so much of the methodology of Six Sigma (process mapping, variation reduction, root cause, gage R&R, DOE, ANOVA, Cpk, etc). I have trained individuals in the things they need, when they need them.
    I agree that there is no “silver bullet” except for werewolves. There is no one or two things that will make you 100% better. There are probably 200 things that each may make you 0.5% better.  But 200 times 0.5% is 100% better.
    If nobody in your facility has been trained in the Six Sigma tool kit, I bet it would be a real benefit if somebody pursued that and your company practiced the Six Sigma philosophy.
    I venture that a full blown “six sigma program” wouldn’t have the payback that you are looking for.
    Just my $0.02.
    —Doug

    0
    #60282

    Dillon
    Participant

    Old:  Great and appropriate question, and one that has been going around in my head for the last couple of weeks.  
    A couple of points of clarification.  The total patient wait cycle (as I have defined it) runs from the time they sign in at the window to the time the work-up process is completed.  At this point, the patient is now “available” to be seen by the doctor or RN.  
    On average, the total time to complete the above is around 30 minutes, of which 57% is the up-front wait mentioned and 43% is the work-up process itself.  Variation in the up-front wait portion is 3X what it is in the work-up cycle.
    You are right that this is a scope change from the original, but I keep looking at the majority of the total customer wait cycle being driven by the wait to start the process, and felt I could not ignore this area of opportunity to improve the customer experience.  Maybe I am too new to this and trying to do too much in one project, as you suggested. 
    In light of what I shared here, let me know if you have any different thoughts. 
    Thanks,  Doug 

    0
    #60280

    Dillon
    Participant

    Thank you, Atul.  The output variable that the customer ultimately sees and feels is how long it takes them to get through the initial wait plus the work-up process combined. I have done a correlation analysis on a number of items against this total wait time, and the only two that emerged were 1. the number of patients in the waiting room at any given time (what could be called backlog), and 2. the type of client service needed. I may not have mentioned that this is a facility that takes both appointments and walk-ins, so at present neither of these variables is controllable.  In addition, I have broken the data down to look at patient arrival quantities by hour and day. 
    Based on the above, my approach has been to focus on the variables that can be controlled (the patient work-up cycle) and evaluate the feasibility of adding a person from the existing staff during peak hours (typically a.m.) to help reduce the up-front wait to begin the work-up process. 
    Any other thoughts?
    Thanks,   Doug 

    0
    #60277

    Dillon
    Participant

    Thank you for your response, David.  Could you elaborate on how to calculate capability for the portion of the process that is exponentially distributed?  I understand how to do this for a normal distribution; however, I am not clear on how to do it when the mean and sigma are equal (exponential/non-normal) and the data is not split 50/50 above and below the mean. This is a situation I have not encountered before. 
    This may be rudimentary for many, but everyone had to learn it the first time. 
    Thank you for your help.   Doug
     

    0
    #102287

    Dillon
    Participant

    kevin,
    What are the parts that you are testing?  Depending on what they are, you may be able to use a crossed study – BUT – you need to be able to determine whether or not you can say that like parts are the same (statistically).
    For example, we were studying our tensile testing measurement system on strip product.  We had a lot of data from past material both across the width and down the length.  Statistically, we were able to say that parts taken across the width (in any given section) could be treated as if they were the same part.  This allowed us to utilize the crossed MSE design in a destructive test.
    You can run it both ways and analyze both to see what the data tells you as well.
    Good luck.

    0
    #101586

    Dillon
    Participant

    My prior company had 3% of the workforce trained as Black Belts and 1% trained as Master Black Belts.

    0
    #101067

    Dillon
    Participant

    Jimmy,
    You can check this website…they have a certification practice test..the questions constantly change when you take/re-take the test.  You can also check out ASQ’s website.  I believe that they have sample tests available as well…

    0
    #101035

    Dillon
    Participant

    Chris,
    You can perform your study as “Crossed” if you can prove that you have a consistent product from piece to piece. 
    We have done this in the past by control charting various samples down the length of a coil of material.  Using this study, we could state that samples from the material were statistically the same which let us treat the material as the “same” part.
    If you cannot prove this, nested it is.  Hope this helps.

    0
    #100832

    Dillon
    Participant

    kalju,
    Contact MiniTab directly.  They are usually very helpful and will try to solve your problem – if they write new code to correct the issue- they generally will send you the fix to install at no cost.

    0
    #100450

    Dillon
    Participant

    Our training material was part of a purchased package – the consultants provided so much training (basically training/mentoring a group through master black belt) and then handing the material off to us to carry on the torch ourselves (after teaching several classes under their watchful eye).
    My personal opinion is that if it is too much over 4 figures, I would work on developing my own before I would buy theirs…

    0
    #100422

    Dillon
    Participant

    No problem Reva…although I think that confirming is a strong word to use.  As I stated, I am no regression expert.  I will be interested to see if Stan or Darth weigh in on your question…

    0
    #100420

    Dillon
    Participant

    Reva,
    I would recommend doing a simple spreadsheet analysis.  If your data is in Excel or some other type of spreadsheet program, simply sort it by decreasing Y.  Then you can do a quick check to see if a certain level of X results in higher Ys.
    However, I would ask you a question…if Y and X are not significantly correlated, how do you know that operating at a specific range of X will help you increase your Y?
    Since you don’t see any significant level of correlation, you will more than likely not be able to find a specific range that increases your Y.  However, if you can determine that X does not significantly affect Y, you can choose the least expensive operating setting for X and set it there.
    Again, hope this helps…but not sure that it will.
    Doug
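The sort-by-decreasing-Y check doesn’t need a spreadsheet; a few lines of Python do the same thing (the data here is made up for illustration):

```python
# Hypothetical (x, y) observations; sort by decreasing y to eyeball
# whether high y values cluster in a particular range of x.
data = [(2.1, 40), (5.3, 88), (3.7, 65), (5.1, 91), (1.8, 35), (4.9, 79)]

for x, y in sorted(data, key=lambda pair: pair[1], reverse=True):
    print(f"x = {x:4.1f}   y = {y}")
```

In this invented data, the highest y values all sit at x around 5, which is exactly the kind of pattern the eyeball check is looking for.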
     

    0
    #100418

    Dillon
    Participant

    Reva,
    I am by no means a regression expert, but through all of my training as a BB and MBB and in my reference books I can find no mention that normal data is a requirement for regression analysis.
    I live by the following rules when it comes to regression:
    1.  The regression model is based on the data set utilized.  Be careful when using the model to predict outside the region it models.
    2.  Measurement systems have to be capable.  A large amount of measurement error wreaks havoc on the model.
    3.  Historical data used to generate a model may not predict future performance.
    4.  Regression should not be used to eliminate potential variables from a DOE, BUT it can be used to add variables.  An independent variable may not look important in a regression analysis, but if included in a DOE where we choose levels outside our normal operating range, it may be important.
    Not sure if this helps you or not.  My only word of caution would be to scrutinize the conclusions of the analysis: normal data may not be required, but whether or not your data is normal should be considered when drawing your conclusions.
    Doug

    0
    #100285

    Dillon
    Participant

    DOE Man,
    I am always looking for new (or in this case, maybe old but new to me) reads on this type of information.  Since Mario is self-published – how can one obtain a copy of his book(s)?
    Thanks.

    0
    #99926

    Dillon
    Participant

    Micho,
    Is the inspection solely a 100% visual inspection by the human eye?  If that is the case, I would believe that your inspectors “want” to do a good job, but when you have to sit there all day inspecting a product, you’re bound to miss a few.
    Have you thought about running a measurement system evaluation (MSE) on the inspection process?  There are ways to design a MSE to examine a system which deals with attribute type data.  I would recommend validating the measurement system in place before heading down one of the other paths…you may find that that is where the real problem lies.
    For help with this, you might try “Measurement Systems Analysis”. You can find this through http://www.aiag.org.  It is a pretty handy MSE (or MSA, in their language) reference guide.
    Doug

    0
    #99912

    Dillon
    Participant

    brinda.g,
    I have attended several conferences at which case studies were presented concerning six sigma and HR.  One of these presentations was given by Ford Motor Co. on how they improved the length of time it took to fill internal job vacancies.  At the time they were also working on their external recruitment process, staff turnover, salary process, and several others.  If you can give me your e-mail I can send you some contact information for the individuals who presented…they may be able to help you with examples, etc.

    0
    #99815

    Dillon
    Participant

    Mike,
    I totally agree with all of your statements.  I apologize if my response seemed to downplay the usefulness of the tool or if I was not completely clear with what I was saying to Naveen.  I would never recommend doing any DOE, etc. if an FMEA was not done on the process (or if an existing FMEA was not, at a minimum, reviewed to ensure that changes were not necessary).  The FMEA is a very powerful tool when it comes to helping solve process related issues as well as machine related issues.  It is also very helpful for establishing control plans, and as you said, making DOEs more successful.
    Thanks for your comments.  I always appreciate reading your postings – unlike some of the others, you always offer constructive help.
    Doug

    0
    #99790

    Dillon
    Participant

    Naveen,
    I would be interested in this too and will be watching to see if anyone else posts a response.
    All of the training exercises I have been through and have facilitated during black belt training relate more to the lean principles of one piece flow which I think would fit the JIT model.  The majority of these lean simulations, however, deal with comparing one piece flow to batch type operations (the signature game, the card game, the beer game, etc.).
    Doug

    0
    #99785

    Dillon
    Participant

    Do you have MiniTab?  If you do, you can run the Process Capability Analysis for your process data.  It will calculate observed performance along with expected “within” and “overall” performance.  These three areas then provide you with PPM < LSL, PPM > USL, and PPM Total.
    I believe that the formulas they use are:
    PPM > USL (Expected ST) = 1,000,000 * P(Z > (USL - xbar) / sigma within)
    PPM > USL (Expected LT) = 1,000,000 * P(Z > (USL - xbar) / sigma overall)
    PPM < LSL (Expected ST) = 1,000,000 * P(Z < (LSL - xbar) / sigma within)
    PPM < LSL (Expected LT) = 1,000,000 * P(Z < (LSL - xbar) / sigma overall)
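    For anyone without MiniTab, the same normal-tail calculation can be reproduced in a few lines of Python with the standard library.  This is a sketch of the formula, not MiniTab’s actual code; pass the within sigma for ST or the overall sigma for LT:

    ```python
    from statistics import NormalDist  # Python 3.8+ standard library

    def expected_ppm(xbar, sigma, usl=None, lsl=None):
        """Expected parts-per-million outside the spec limits, assuming a
        normal distribution with the given mean and sigma."""
        z = NormalDist()  # standard normal
        ppm = 0.0
        if usl is not None:
            ppm += 1_000_000 * (1 - z.cdf((usl - xbar) / sigma))  # PPM > USL
        if lsl is not None:
            ppm += 1_000_000 * z.cdf((lsl - xbar) / sigma)        # PPM < LSL
        return ppm

    # e.g. mean 10, sigma 1, USL at 13 (a 3-sigma limit) -> roughly 1,350 PPM
    ```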

    0
    #99779

    Dillon
    Participant

    Naveen,
    I realized that I did not comment on one team versus a team for each department. 
    I have found that it works best to have some individuals from the department along with individuals from outside the department.  This lets you have a good mix of the “experts” and the people who aren’t afraid to ask the “stupid” questions. 
    Concerning one team versus many teams, it all depends on what your management group is willing to support.  If you have one team, you have to be ready to have some really long FMEA meetings – it is not a simple process to go through and can be made even more difficult by having a lot of people involved. 
    Personally, I think that you would be further ahead to have smaller teams working on each process.

    0
    #99749

    Dillon
    Participant

    Naveen,
    I will give you my opinion and hope that it helps.
    1)  The FMEA can fit anywhere in the DCA portion of the PDCA cycle.  It contains some of the DO part, some of the CHECK part, and definitely part of the ACT part.  After all, without action an FMEA is waste.
    2)  The best way to handle FMEAs, in my opinion, is to first process map the area you want to look into/investigate.  The process map will contain much of the information that you will need to fill out the FMEA anyway (such as the process steps, potential failure modes (outputs), potential causes (inputs), inspection steps (control points), etc.).  This makes the FMEA much easier to build.  After you have your process map, you can decide whether you are focusing on the whole section or a portion of that section.  Focusing a FMEA on an entire section may prove to be a HUGE task (depending on how complex the process is).
    3)  I believe that, when applied correctly, both concepts focus on the same thing – eliminating waste in the process so that you can supply products to your customers when they want to receive them in exactly the right quantity and of the highest quality possible.  It all depends on the level to which you are applying these concepts.  I think that there will be some disagreement on this subject – BUT the main issue is that they try to focus on the same thing.
    Hope this helps.

    0
    #99574

    Dillon
    Participant

    Arun,
    You may want to run a search under “aliasing”.  Aliasing and confounding are often used interchangeably.
    If you run that search on this site, it does turn up an article which walks you through a case study.  This might help.
    Doug

    0
    #99306

    Dillon
    Participant

    Dan,
    I don’t know of any specific articles and will be curious to see the responses you get…
    I have not had the opportunity to utilize p charts very often, but I was taught to analyze in the same manner as a control chart utilizing the rules.
    Will be watching for other posts…

    0
    #99293

    Dillon
    Participant

    Julio,
    If runs are not TOO expensive you may want to consider running more than one centerpoint.  You can get a lot of information just from those data points concerning noise, etc.

    0
    #99091

    Dillon
    Participant

    Hank,
    Thanks for e-mailing me your GB core competency framework spreadsheet.  I like what you have laid out but have a few questions/suggestions for you:
    1) I would recommend adding control plan to your green belt requirements.  They need to be able to develop and implement them just as BBs/MBBs do.  Otherwise their projects can slide backwards as well.
    2) Do your green belts not learn FMEA?  If not, I might recommend adding that as well – as I am sure you are aware of the power of this tool.
    3) Do you teach any lean techniques/tools to your GBs/BBs/MBBs?
    Thanks again for your model.
    Doug

    0
    #99028

    Dillon
    Participant

    WRV,
    Just a couple of additional thoughts…
    As you utilize the DMAIC methodology, don’t forget to look at Lean as well.  Standard work is typically thought of as a lean tool…but it can be key to solving some issues that are generally looked at as Six Sigma problems (quality issues, etc.).  You can eliminate a lot of variation in the process just by getting everyone to do the job the same way.  Implementation works best when you integrate the two methodologies together as they complement each other very well.
    Don’t forget that, when developing/documenting processes or just working projects in general – it is a team effort.  Get your process experts involved (the operators) and let them develop the standard work for their area with help and guidance from you.  The effort will not last or be well thought of if everything is developed in a vacuum and no one else is involved.
    You can also work process improvements in as you document the current process.
    Again, good luck!

    0
    #99027

    Dillon
    Participant

    WRV,
    I think that some of the real power behind six sigma is DMAIC (Define, Measure, Analyze, Improve, and Control).  If you follow this methodology when applying the tools that you know (i.e. process mapping, spc, measurement system evaluation (more powerful, in my opinion, than gage R&R), FMEA, etc.) you will undoubtedly create a better process.  One of the big keys, which is often overlooked, is that all of these tools link together and support one another – use them together to drive your improvement efforts.
    There are a lot of helpful websites, including this one that can help you learn the tools…you can try http://www.moresteam.com which has some pretty basic examples of some of the tools and how to use them in addition to https://www.isixsigma.com.  Both of these websites have downloads of blank forms that you can start out with if you do not have any available to you.
    I would suggest that you come back to this forum for helpful advice as you go along.  Many people are willing to help out – just be careful what you do with the advice you get…not everyone is an expert (not that they have to be…the best way to learn is to interact and this forum provides that type of atmosphere) and not everyone always gives you the right advice (but there are many people out here who will point that out!).
    Good luck!

    0
    #98988

    Dillon
    Participant

    Idar,
    When you select the chart that you want to create (I/MR, Xbar/R, etc.), the first box that opens (where you select the data that you want to chart) also has a field where you can input a historical average and sigma level.  If you calculate these based on the data you want to use to set the control limits and enter it in these fields, MiniTab will plot all data against those control limits and average.
    Hope this helps.

    0
    #98987

    Dillon
    Participant

    I don’t know about other companies or programs…but we have a “story board” concept that gets put up in the area and it basically walks everyone through the DMAIC process as it pertains to that specific project.

    0
    #98925

    Dillon
    Participant

    Has your company used FMEAs before?  If they have, I would recommend developing your own training module based on an actual FMEA performed in your company.  Getting one from someone else whose business is most likely different won’t be much help, and that is speaking from experience.  We used an FMEA example from the consulting firm that trained us and the green belts that we were training did not see the value right away.  It took application and many hours of mentoring to get them to use the tool correctly and see the value in it.  We then re-wrote the training material (FMEA and many other modules) using actual examples from our own processes and found that the level of learning increased significantly based on class participant feedback.

    0
    #98921

    Dillon
    Participant

    There is some analysis required when you go to implement your kanban:
    1.  Demand analysis – you need to know all of the parts that will be part of the kanban system and what the daily demand is (if you are setting your system up based on a day, substitute whatever time period you are looking at).
    2.  Capacity Analysis – hopefully you have a value stream map representing your business.  If you do, the information you need should be on it.  If not, you need to know the # of machines available, the # of shifts they operate, the gross machine capacity, the reliability, cycle time and set-up time data.  Having this information will help you set the number of parts that will be included in one kanban.
    A general equation that can be used (or modified to fit your system) is:
    # of kanban = (Daily Demand x (order frequency + lead time + safety time)) / container quantity
    You also need to calculate a run line.  This represents how many kanban cards can be consumed before the supplying process starts to replenish.  You can calculate this by:
    Run Line = (daily demand x order frequency) / container size
    The # of kanban is calculated for each part/product in your system.  The number of pieces per kanban is calculated based on your capacity and demand analysis.
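    The two equations above drop straight into a short calculation.  Rounding the card count up (so coverage is never short) and the sample numbers below are my own assumptions:

    ```python
    import math

    def num_kanban(daily_demand, order_freq, lead_time, safety_time, container_qty):
        """# of kanban = daily demand x (order freq + lead time + safety time),
        divided by the container quantity, rounded up."""
        return math.ceil(daily_demand * (order_freq + lead_time + safety_time)
                         / container_qty)

    def run_line(daily_demand, order_freq, container_qty):
        """Cards that can be consumed before the supplying process replenishes."""
        return math.ceil(daily_demand * order_freq / container_qty)

    # Made-up example: 100 pcs/day, ordered daily, 2-day lead time,
    # half a day of safety time, 25 pcs per container:
    cards = num_kanban(100, 1, 2, 0.5, 25)   # -> 14 cards
    line = run_line(100, 1, 25)              # -> 4 cards
    ```

    Run this once per part/product in the system, as described above.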
    There are a lot of decent reference books out there on kanban systems that should help you set up a system.  Depending on what you are trying to do, I would recommend “Implementing a Mixed Model Kanban System” by Vatalaro and Taylor.  The above information came out of that book.  The book includes step by step instructions for the implementation of a system as well as all of the information that you will need to do your capacity and demand analysis.  The isbn # is 1-56327-286-5.
    Hope this helps.

    0
    #98884

    Dillon
    Participant

    Depending on the company, there are several levels of certification.
    Generally, most companies start at Green Belt and end at Master Black Belt.
    A green belt is taught the majority of the Six Sigma tools with the exception of DOE, advanced MSE (or MSA), and advanced SPC.  Tools that will be learned include:  thought mapping, process mapping, fmea, mse, control charts, intro to DOE, etc.
    A black belt is taught all of the things a GB gets trained in plus DOE, advanced MSE, and advanced SPC.
    A master black belt basically reviews all of these topics with the emphasis on conducting and successfully teaching these topics to potential green/black belts.  A master black belt may also get into more advanced areas of DOE, etc.
    Again, a lot of the training material depends on the company and whether or not the company is using a consulting group.  Most consulting groups teach the same concepts but may do so to different levels of understanding and complexity.  If you get involved in a six sigma program through a consultant group – look for one that provides training spread over several months (not one week) and one that lets the potential green/black belt work an actual project from the company.  This lets the person apply the tools as they learn them plus it lets the teacher critique the student.
    Hope this helps.

    0
    #98706

    Dillon
    Participant

    Dear Quality Witch,
    I understand your problem.  The company I used to work for had many projects that were chosen poorly or had poor results (not what was expected).  I would agree that some of the six sigma tools are hard to apply in the transactional world – but I have seen them used (I was actually a participant in a DOE designed to study a manual time card system).  I have seen SS used to improve the hiring process at a company and I have seen a DOE which used customer surveys as the response variable.  So there are ways to use the tools.
    My personal opinion of the SS methodology is that the DMAIC process helps drive consistency and lends structure to the problem solving process.  You don’t have to use all of the tools in the tool kit to fix every problem – you just need to pull out and use the ones that you do need.
    I would not abandon the Six Sigma process…just understand that it is a tool in your tool kit – it may not be the right tool to apply all the time (i.e. every problem does not need to end up as a project).
    Hope this helps.

    0
    #98703

    Dillon
    Participant

    Howard,
    I have never seen a framework for a 6M Blitz (don’t forget mother nature).  I have only used the 6Ms through fishbone diagrams to help root cause analysis along.

    0
    #98652

    Dillon
    Participant

    Diana,
    The only way to find this information is to have a very good understanding of your process.  You will need to know how long material sits and how much time you spend changing the form, fit, or function of the product.
    To me, rework is waste – you can certainly track rework separately if it is important to you.  The best way I know to describe value added time is the time you spend actually changing the form, fit or function of your product – all other time, in a pure sense, is waste.
    The information I was taught was:  assembly type process time is generally 80% value added, 20% non-value added.  Manufacturing type process time is generally 25% value added, 75% non-value added.  I do not believe that these numbers came from a reputable source – I think that it was based on the instructor’s opinion/experience of places that he had worked with.
    The only real way to come up with the numbers you are looking for is to analyze your process.
    Hope this helps.

    0
    #98523

    Dillon
    Participant

    Kevin,
    I use Visio and hand calculate all the data/information that goes on to the map.
    Doug

    0
    #98514

    Dillon
    Participant

    Not being a pain at all…as we have been discussing this, I am sitting at my computer working on a new value stream map for my company!
    You can show it several different ways – the key is to make sure that your team understands what the map is telling you.
    Typically if I am creating a VS Map that contains more than one product line, I show shared process steps for each product line.  That way I can show different cycle times, takt times, etc. – generally if I show it all on one step on the map it gets a little too detailed for the operators.
    In your case, I would ask the question – ‘are there any scheduling rules that that process step has to follow?’.  If there are specific rules for the operator to follow – then you definitely need to take that wait time into consideration as it does interrupt the flow of your value stream.

    0
    #98512

    Dillon
    Participant

    No problem David.
    You are correct, you would look at how many times you have to paint it and how long it takes to do the painting and that will give you a pretty good estimate of how many minutes/hours/days of inventory you have in front of that specific process.

    0
    #98510

    Dillon
    Participant

    David,
    Generally, when I build value stream maps with my teams, we treat inventory as blocks of time, not number of parts, lbs, pieces, etc.  If you have inventory levels (mins, maxs, or daily averages) put the weight, pieces, parts, etc, in your inventory triangle but on the time line at the bottom of the map, convert the pieces/parts/weight to a block of time.
    For example, if we have 10,000 lbs of inventory before a process, and the cycle time of that process is 1 lb per minute, we have 10,000 minutes of inventory sitting in front of that machine.
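    That conversion is just quantity divided by consumption rate; a trivial helper (the units are whatever your map uses) makes the point:

    ```python
    def inventory_as_time(inventory_qty, rate_per_minute):
        """Convert an inventory quantity into minutes of coverage at the
        consuming process's cycle rate."""
        return inventory_qty / rate_per_minute

    # 10,000 lbs sitting ahead of a process that consumes 1 lb per minute:
    minutes = inventory_as_time(10_000, 1.0)   # -> 10,000 minutes of inventory
    ```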
    Hope this helps.

    0
    #98501

    Dillon
    Participant

    I also would be interested in the opinions of others out there…
    I have always analyzed the DOE 2x – once with Y1 and once with Y2.

    0
    #98500

    Dillon
    Participant

    Paolo,
    I may be confusing what you are saying…but let me take a crack at offering some information on how I normally run a MSE (measurement system evaluation).
    First I choose the parts that I am going to have measured.  I typically pick at least 5, but no more than 10 (the MSE can then take too much time and the amount of information that you get is not really worth it).
    The 5 parts that I pick are considered to be the “SAME”.  In other words, they are of like size, shape, weight, etc.  For example, if I am evaluating a scale’s ability to distinguish between weights, I would choose parts that weigh roughly the same (part 1 = 100 lbs, part 2 = 105 lbs, part 3 = 102 lbs, part 4 = 101 lbs, part 5 = 104 lbs).  These parts may have been produced back to back or from one day to the next, but they are the same part and are considered to be the same.
    I would then have 3 operators measure each part a minimum of 3x each (and maybe 5x if it is not a difficult measurement to take).
    A full measurement system evaluation will then give me the gage R&R results plus a range chart and a xbar chart at a minimum (other charts can be generated as well, but you can get all of the information that you need from these 2 charts plus the gage R&R info).  From this information, I can determine whether or not my measurement system can distinguish the difference between the parts that I have chosen (which are considered to be the same as they are almost identical to one another).
    If you choose 5-10 parts that are “DIFFERENT” you run the risk of your gage R&R coming back and telling you that your measurement system is GREAT when it really isn’t.  For example, using the weight example, if my 5 parts are:  part 1 = 100 lbs, part 2 = 300 lbs, part 3 = 200 lbs, part 4 = 400 lbs, and part 5 = 500 lbs.  The gage R&R is going to tell me that my measurement system is more than capable of distinguishing these parts from one another.  Which is great, if I am only interested in being able to see a difference of 100 lbs in my samples.  But if I am looking for differences of, say, 5 lbs or less, I do not know if my scale is capable of seeing this type of shift.  But if I had chosen my parts to reflect the type of difference I want to be able to detect, my gage R&R would tell me whether or not my measurement system was capable.
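    The trap described here shows up directly in the usual %R&R ratio (gage sigma over total study sigma, in the AIAG %StudyVar style); the sigma values below are made-up illustrations, not measured data:

    ```python
    import math

    def pct_rr(sigma_gage, sigma_part):
        """% Gage R&R as gage sigma over total study sigma, where
        total = sqrt(gage^2 + part-to-part^2)."""
        return 100 * sigma_gage / math.sqrt(sigma_gage**2 + sigma_part**2)

    # The same hypothetical 2-lb gage error, two ways of picking parts:
    tight = pct_rr(2, 2)    # parts within a few lbs: ~71% R&R -- looks incapable
    wide = pct_rr(2, 150)   # parts spread 100-500 lbs: ~1.3% -- looks "great"
    ```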
    I’m not sure if this helps or not…I may be completely missing the point you are trying to make.

    0
    #98498

    Dillon
    Participant

    MMAN,
    The additions that you make are certainly true.  I like your thinking….
    Doug

    0
    #98461

    Dillon
    Participant

    In that case, my opinion is that it needs to be driven by the head honcho….if he isn’t supporting it…why should anyone else?  Once you get that buy-in..you can drive it through all levels of the organization…we ended up writing it into the goals and objectives for every level of management in our corporation….

    0
    #98458

    Dillon
    Participant

    Bart,
    The best way to get them to promote/communicate six sigma is to get them involved.  You may have a tough time convincing them that six sigma applies to them…but it does.
     

    0
    #98457

    Dillon
    Participant

    MMAN,
    When I first saw your posting, I wasn’t sure how to respond.  I’m not sure that what I am about to write provides you with what you are looking for…so please let me know…
    When it comes to working a project, any type of project (lean or ss), I generally follow the DMAIC model.  I find ss lacking in that it generally takes way too long to work a project and I find lean lacking in that it generally does not have a control element (there is room to argue this point…but at my company, lean efforts usually ended up back where they started due to lack of control).  Therefore I like to integrate the two methodologies together to help solve problems/implement products at a faster pace.  The lean tools can help speed up a ss project and ss tools can help implement better controls in lean projects.  Plus I like the DMAIC structure – you can use it for lean just as well as ss.  It helps provide a little structure to the whole thing. 
    When it comes to choosing projects for myself or the BBs that report to me, I use a Value Stream Map to drive the decision (except new products) – regardless of whether they are ss or lean.  If you’re not improving the VS, why are you working on it?
    Doug

    0
    #98453

    Dillon
    Participant

    Mia,
    Getting that point of view across is a never ending battle.  You will always have people test the system to see if they can get something through.  Having said that…the way that my group has handled this in the past and continues to handle it is through a project proposal form.  Individuals wishing to get a project approved have to submit this form to our steering committee.  If the form contains all of the necessary information (it includes financial information as to projected pay backs, etc.) it can be put on the project list and assigned to a BB or GB.  One of the BB’s or GB’s first responsibilities (during the define phase) is to make sure that the project can deliver what the proposal stated – if not, the committee may decide to cancel the project and assign the BB/GB to another project.
    I do not really have a rule of thumb for what is a project and what isn’t…the main contention that I have with most upper level execs is that if you know the answer – implement it.  Don’t push it through the SS program just to launch a project – it is a waste of everyone’s time and resources – plus it can hurt the SS effort in the long run due to unsuccessful projects.
    Hope this helps.

    0
    #98429

    Dillon
    Participant

    Paolo,
    Six Sigma can be very good – if the supporting process is there and projects are chosen well.  When dealing with upper level management I only ask two things – 1) pet projects don’t get pushed through just because someone “thinks” that they are important and 2) not everything is a project.
    Good luck!  I would recommend visiting this site often – there is a lot of useful information here and a lot of people who are willing to help out if you ask (I know that it has helped me a lot!).

    0
    #98426

    Dillon
    Participant

    Paolo,
    I am a newby when it comes to Lean and Six Sigma practice.  I am a Master Black Belt in both Lean and Six Sigma methodologies…but my company is new to both of these, we have only been practicing for a little over 4 years – so I, as well as my company, still have a lot to learn.  The industry is metals – I am, by degree, a Metallurgical Engineer.
    As far as MSAs are concerned, I have led and worked with teams on 30+ evaluations of various types of measurement systems.

    0
    #98424

    Dillon
    Participant

    Sean,
    You can e-mail/call MiniTab to get your answer – they are generally pretty helpful when it comes to providing the formulas they use to calculate things.  And if you can provide them with ideas on how to do it better (i.e. a different calculation for smaller sample sizes), they will generally work it into future releases.

    0
    #98423

    Dillon
    Participant

    No problem Carlos.  I hope that you will receive more replies to your post – that is one key point to this forum – you can get a lot of good (and bad) information from people out here – but it is always helpful to get several opinions on topics.

    0
    #98420

    Dillon
    Participant

    Paolo,
    You will miss the opportunity initially.
    In my experience with MSAs, you are first trying to determine whether or not you have a good measurement system that can correctly identify a shift in whatever you are measuring.  Therefore, I typically like to reduce the amount of part to part variation that I know can exist by choosing parts that are from the same batch (and if possible, parts produced close together within the same batch).  This lets me reduce the amount of part to part variation so I get a better understanding of the measurement system variation.
    Having said all of this, the basic rule of thumb that I use to pick my parts is to have them all fall within the range that I want to be able to distinguish between.  For example, if I am evaluating a laser micrometer’s ability to measure diameter, I would pick samples that are of roughly the same diameter (if I pick 5 parts they would be:  0.07874″, 0.07870″, 0.07900″, 0.07800″, 0.07950″).  After performing the MSA, I would know whether or not my measurement system can distinguish between these sizes.  If I were to pick 0.07″, 0.09″, 0.125″, 0.150″, etc. my measurement system would look really good (you can basically tell these sizes apart by looking at them, so why shouldn’t the measurement system look good!).
    Once I know that my measurement system is good enough to tell the difference in the size (to the level that I need it to), then I can go out and measure parts from different batches, time periods, etc. and have confidence that my measurement system is showing me the part to part variation that may occur over time.
    Hope this helps…and sorry for the windy response…

    0
    #98418

    Dillon
    Participant

    Paolo,
    I would recommend taking your samples from one batch of the same parts.  I am assuming (which is risky!) that you are trying to determine how well your measurement system is working…which means that you want to minimize the amount of part to part variation that you may have so that you can get a better understanding of your measurement system variation (of course, you can do this with any of the examples you posted, but I have found that minimizing your p to p variation up front by choosing like parts makes the analysis a little easier).  After you have this, provided your measurement system is adequate, you can then test for part to part variation.

    0
    #98417

    Dillon
    Participant

    Carlos,
    I understand your question and would pose the following question back to you…
    “Why not use Kaizen to aid Six Sigma?”
    I have worked a variety of Lean and Six Sigma projects and have used Kaizen to shorten the amount of time needed for a SS project.  If you can get the dedicated resources required for a Kaizen event (which, in my opinion, is the most difficult part of Kaizen), you can easily complete the DMA portion of the DMAIC SS project.  Then your team basically just has to work on the Improvement implementations and the control phase of the SS project.  I have used this approach with very good results and have cut the average 3-4 month time frame of a SS project to 2 months (we have done some quicker than that).
    Doug

    0
    #98251

    Dillon
    Participant

    Prof. Tom,
    I would hate to see a SS Lite course developed.  Being a MBB I see a lot of ‘certified’ BBs that do not get the whole picture – there are too many people trying to cash in on the SS market by rushing people through training.
    I think that your approach, having real projects for the students to work on from their own companies is a great way to teach the methodology.  The important message to get to your customers is that most BBs, once certified, do work on more than one project at a time.  So they need to realize that the first project is generally the lengthy one as they are trying to learn the tools at the same time.
    Doug

    0
    #56196

    Dillon
    Participant

    Hank,
    I would be interested in seeing your model if you do not mind…
    ddadillon@earthlink.net
    Thanks.

    0
    #56194

    Dillon
    Participant

    Andrew,
    Here is what I have been involved with…maybe it will help you out..
    Candidates are recommended by their manager/supervisor and their past performance reviews are reviewed.  If they are a model employee and possess the desired skills (i.e. they are enthusiastic, eager to help create change, are viewed as a good communicator/team player) they are entered into the program.
    The training method is a Train – Apply – Review type of system.  They basically learn DM of the DMAIC model during the first session.  AI during the second, and C during the third.  Training takes about 3 months.  Each session utilizes examples of the tools (in class exercises).  Each candidate is assigned to a project when they enter the training program so they are expected to take what they learn and apply it to this project.  At each subsequent session, they give a report out to the class showing how they used the SS tools to get to the point that they are at.  As they learn new tools, they apply those to their project and report out and show how they used them.  They are also assigned to a mentor (sometimes the instructor, sometimes a certified GB or BB) who is available to help them out should they run into problems.
    We have a scoring sheet which the instructor utilizes to gauge their use and understanding of the tools, and they must receive a minimum score in order to be certified.  A 360 review is used with their first team to gauge how well they worked with the team.  Based on these results, a GB can then obtain their certification.  If any issues are uncovered (tool usage or team skills) a development plan is put together by the GB and their mentor.
    Hope this helps.
     

    0
    #98235

    Dillon
    Participant

    Andrew,
    Here is what I have been involved with…maybe it will help you out..
    Candidates are recommended by their manager/supervisor and their past performance reviews are examined.  If they are a model employee and possess the desired skills (i.e. they are enthusiastic, eager to help create change, and are viewed as a good communicator/team player) they are entered into the program.
    The training method is a Train – Apply – Review type of system.  They basically learn DM of the DMAIC model during the first session.  AI during the second, and C during the third.  Training takes about 3 months.  Each session utilizes examples of the tools (in class exercises).  Each candidate is assigned to a project when they enter the training program so they are expected to take what they learn and apply it to this project.  At each subsequent session, they give a report out to the class showing how they used the SS tools to get to the point that they are at.  As they learn new tools, they apply those to their project and report out and show how they used them.  They are also assigned to a mentor (sometimes the instructor, sometimes a certified GB or BB) who is available to help them out should they run into problems.
    We have a scoring sheet which the instructor utilizes to gauge their use and understanding of the tools, and they must receive a minimum score in order to be certified.  A 360 review is used with their first team to gauge how well they worked with the team.  Based on these results, a GB can then obtain their certification.  If any issues are uncovered (tool usage or team skills) a development plan is put together by the GB and their mentor.
    Hope this helps.
     

    0
    #98214

    Dillon
    Participant

    I was taught the 80% ‘rule’ as have many other people.  My takeaway was that the 80% number is just a rule of thumb (and I am finding that there are a lot of those).
    I’ve always been a little leery of using regression analysis in the first place unless I know where the data came from and know that there have not been any significant changes in the process (otherwise, you can’t depend on the model to predict future performance based on past data).
    Just my 2 cents…
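    As a quick illustration of that rule of thumb, here is a minimal sketch (made-up data, not from any real process) that fits a simple line and computes R-squared, which is the figure the 80% ‘rule’ is usually applied to:

    ```python
    import numpy as np

    # Made-up example data: hypothetical process input x and output y
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

    # Fit a simple least-squares line y = b0 + b1*x
    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x

    # R^2 = 1 - SS_residual / SS_total
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r_squared = 1 - ss_res / ss_tot

    # The 80% 'rule' would call this model adequate if r_squared >= 0.80 -
    # but as noted above, only if the underlying process has been stable.
    ```

    The caveat in the post still applies: a high R-squared on historical data says nothing if the process has shifted since the data were collected.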

    0
    #98211

    Dillon
    Participant

    Naveen,
    I hope that this will help….this is a little lengthy….
    The majority of FMEAs that I have worked with teams on have been on existing products/processes and the method that I use to develop the FMEA is:
    1.  Create/Review a process map for the product that you are having issues with.  The PMAP needs to contain all aspects of the process (i.e. inputs and outputs of each step, and if possible, the control parameters for critical inputs/outputs).  If the process is very involved (i.e. many, many steps) – you may want to select the critical process steps that dramatically affect the product that you are producing – if you have the issue narrowed down to one particular step in the process, even better.
    2.  Grab the FMEA form (there are many good examples in ‘Potential Failure Mode and Effects Analysis, FMEA 3rd Edition’ by the AIAG group – phone number 248-358-3003).   This reference book has forms for Design FMEAs and Process FMEAs. You can also find examples on this website as well as other Six Sigma web sites.
    3.  Select a step in the process (preferably narrowed down from the PMAP).  The outputs become your failure modes (i.e. if the output is consistent adhesive flow, the failure mode is inconsistent adhesive flow).  The failure effects are then potential issues that may be seen by the process or customer if the failure mode occurs.
    4.  Once you have the failure mode(s) identified you can identify the causes – the causes can generally be found in the inputs of the process step.  Then you can identify controls (if you have them! this is the one area where I find most of the problems with an existing process – we just don’t have the controls in place that we need to have) for each of the causes.
    5.  The next step is to complete the ratings and calculate your RPN to determine critical items to attack.
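    To make step 5 concrete, here is a minimal sketch of the RPN calculation (RPN = Severity x Occurrence x Detection, each conventionally rated 1-10).  The failure modes and ratings below are hypothetical, not from a real FMEA:

    ```python
    # Hypothetical FMEA line items: (failure mode, severity, occurrence, detection)
    # Each rating is on the conventional 1-10 scale.
    items = [
        ("inconsistent adhesive flow", 8, 6, 4),
        ("missing fastener",           9, 2, 3),
        ("scratched surface",          4, 5, 2),
    ]

    # RPN = Severity x Occurrence x Detection; attack the highest RPNs first
    ranked = sorted(
        ((mode, sev * occ * det) for mode, sev, occ, det in items),
        key=lambda pair: pair[1],
        reverse=True,
    )

    for mode, rpn in ranked:
        print(mode, rpn)
    ```

    Sorting by RPN is what turns the completed form into a prioritized attack list.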
    As I said before, I hope this helps.  There are many ways to develop an FMEA, but I have found that working from a process map is the best way to attack a problem that is occurring in a developed process.
    Doug

    0
    #98207

    Dillon
    Participant

    Thanks McD….appreciate the input!

    0
    #98175

    Dillon
    Participant

    GDW,
    On top of what you mentioned (successful project completion, ROI, etc.) I have used 360s with team members from projects that the BB led – this helps rate the team side of the BB – but 360s can be time consuming.  Project impact should play a major role as well – but you have to be careful not to penalize the BB for poor project selection (I am assuming that you have a committee or a designate who selects projects based on certain guidelines/criteria).
    Doug

    0
    #98143

    Dillon
    Participant

    Bill,
    You’re right….tape does not look the best but it is easy to replace.  Generally we used tape when setting up the process to help test the system (less money, quicker to do, etc.) and then went to more permanent methods after we worked the bugs out – based on our audits, tape always got you in trouble because of the way it degraded over a short period of time.
    Doug

    0
    #98128

    Dillon
    Participant

    sankar,
    In order to proceed, you have to ask yourself one very important question:  “Can ‘different’ parts be considered the ‘same’?”
    There are several strategies which can help you make this work:
    A) You can subdivide the part (which does not sound like it would work for your process).  B) Select parts sequentially from a production lot.  C) If sequential samples are not available, use parts from the same production lot.
    It is crucial that you fully understand the process…otherwise true part variation could be misconstrued as measurement variation.
    Hope this helps.  Doug

    0
    #98127

    Dillon
    Participant

    John K.
    I have been involved in several Kanban system implementations which ranged from boards and cards to control WIP to painted spots on the floor (for skids, bins, boxes, etc.).  My approach has always been to select the easiest system to understand and maintain.  I would recommend one of two systems:  painted spots on the floor or a Kanban board and card system.  There are advantages and disadvantages to both. 
    With the painted spots on the floor, it simply comes down to filling only the open spots and shutting down when all spots are full.  If you need to increase the WIP, you only have to add more spots.  The problem with this is that if you need to decrease WIP, you have to remove some of the painted spots, which can be a real pain, and the other issue is that paint wears off so you have to repaint from time to time.
    With a board and card system, adding to or reducing the WIP level is as simple as adding or removing cards from the board.  The only real problem that I have experienced with this system is that cards can get lost if you allow them to be removed from the board (i.e. if they travel with your tubs).  We have gotten around this by having the employees flip a card over to signal that material has been removed.  When the operator replaces it, the card is turned back over.  The cards are color coded on the back to signal trigger levels (some cards have yellow backs, some have red).
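    The flip-a-card logic above can be sketched roughly as follows (class name, card counts, and trigger levels are all made up for illustration):

    ```python
    # Minimal model of a kanban board where cards are flipped rather than removed.
    # A face-up card means material is in place; a flipped card signals a refill.
    class KanbanBoard:
        def __init__(self, num_cards, trigger_level):
            self.cards_flipped = 0          # flipped cards = open refill signals
            self.num_cards = num_cards      # total WIP authorized by the board
            self.trigger_level = trigger_level

        def remove_material(self):
            """Operator takes a container; flip one card to signal replenishment."""
            if self.cards_flipped < self.num_cards:
                self.cards_flipped += 1

        def replenish(self):
            """Upstream operation refills; turn one card face-up again."""
            if self.cards_flipped > 0:
                self.cards_flipped -= 1

        def needs_attention(self):
            """True once enough cards are flipped to hit the color-coded trigger."""
            return self.cards_flipped >= self.trigger_level

    board = KanbanBoard(num_cards=10, trigger_level=4)
    for _ in range(4):
        board.remove_material()
    ```

    Because the cards never leave the board, nothing can get lost in transit – the whole state of the loop is visible at a glance.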
     
    I know that this was a little wordy, but I wanted to try and be as descriptive as possible.  Let me know if you have other questions or comments.
     
    Doug

    0
    #97377

    Dillon
    Participant

    Here we have an 18-month BB program for which no prior GB training is required.
    –  4 weeks of classroom training
    –  Final exam (75% minimum)
    –  Two successful projects completed for certification
    Training of GB is not required for BB.
    However training of GB is required for MBB (Master Black Belt) rank.

    0
    #96908

    Dillon
    Participant

    I think that you have gotten a lot of good responses to your question…
    Having been through both…my suggestion would be to utilize consultant training if you can afford it.  The reason for this is the hands-on training the candidates will receive, along with the experiences that a certified black belt or master black belt can pass along.  Even though the training may target the “middle” of the class, candidates who may be more advanced because of their background or who want to learn more can always get extra time with the instructor after class or between classes.  If the consultant is a good one, they do not want to turn out/certify poorly trained black belts as it is a poor reflection on their company.

    0
    #96907

    Dillon
    Participant

    I agree with Rob.  I would think that the answer was 6, but the guard recognized the spy for what he was…
    Are you going to post the correct answer?

    0
    #96906

    Dillon
    Participant

    F Nunez,
    We have implemented “Orange Belts” at the floor level.  This is basically an overview of all Lean Manufacturing Principles as well as Six Sigma Principles.  More detailed information is taught on Process Mapping, FMEA/Fishbone, MSA (or MSE), 5S, and Setup Reduction.  The individuals who receive this training are not expected to work their own projects but to serve as team members/resources to Green Belts and Black Belts who have been more thoroughly trained in the rigors of the DMAIC.  The main reason that we implemented this level of training was to reduce project completion times.  We found that Green Belts and Black Belts spent a lot of their time teaching the tools to team members when working their projects.  By introducing the tools to the employees and providing them with some basic training, we were able to reduce the time spent doing this by GBs/BBs.
    Hope this helps.

    0
    #96905

    Dillon
    Participant

    Barb,
    We have used kaizen in the office to do several things.
    For example:
    1)  We have used the kaizen approach to implement 5S throughout the front office of our manufacturing facility.
    2)  We have used the kaizen approach to trim the time it takes to receive customer orders and create shop floor routings.
    3)  We have also used it to trim the “footwork” of production planning personnel – i.e. eliminated report generation, combined reports, reduced the amount of scheduling that needs to take place, etc.
    The major issue with utilizing kaizen is getting the resources dedicated for the time period that you have (2-5 days).  There are a lot of reference materials available on kaizen out there and a good place to search for them is http://www.productivitypress.com or even http://www.amazon.com will have many of the same titles (and you may be able to purchase them used).
    Hope this helps.

    0
Viewing 100 posts - 1 through 100 (of 129 total)