iSixSigma

Jamie

Forum Replies Created

  • #189294

    Jamie
    Participant

    Ted, there are other ways. Now that you’re working more efficiently, can you increase your capacity (process more orders per day, or even take on another task)? Also, what does the reduction in defects do for you? Are there costs associated with a defect (follow the trail)? You will no longer be incurring those costs. Finally, in working with the government I’ve been able to justify projects on the basis of “resource recovery”: resources have been taken out through budget cuts without a corresponding reduction in the work assigned, so the project is justified as an enabler that allows us to recover the resources that were removed.

    0
    #57645

    Jamie
    Participant

    Hello –
    Have a look at our web site at the U.S. Environmental Protection Agency (U.S. EPA) for more information about the use of Lean/Six Sigma by governments in the United States. 
    http://www.epa.gov/lean/admin.htm
    A number of U.S. state governments also have good websites, which are listed on the U.S. EPA web site and in the U.S. EPA documents. See in particular http://lean.iowa.gov/
    I would be very interested in knowing about government agencies, anywhere in the world, that use Lean/Six Sigma.
    Thanks.
    Jamie Burnett
    U.S. Environmental Protection Agency
    Office of Environmental Policy Innovation

    0
    #170871

    Jamie
    Participant

    George
    I’m ready to share those slides with you (if they are good), and may send you a draft for a BB project.
    Thanks
    [email protected]

    0
    #142362

    Jamie
    Participant

    I did the online course due to time constraints. I did it through Six Sigma Online (sixsigmaonline.org).
    Their rates beat everyone, both online and in class.
    Try it.

    0
    #141215

    Jamie
    Participant

    Hey All
     
    I am also working on my final Black Belt project.  I have my actual project that I need to put into the report; I’m just stuck on how to formalize it into a report for my final.
    Any thoughts or templates that may be useful in getting started?
    Ciao

    0
    #120768

    Jamie
    Participant

    I believe the rank deficiency is a result of the X’s being correlated. Gear 4 was only used in year 3, so if there is a difference, is it due to year or to gear? And if there is a difference in the year 3 / gear 4 group, what caused it? There is not enough data to tell.
    Tabulated statistics: year, gear
    Rows: year   Columns: gear

              1    2    3    4   All
    1        90   14   21    0   125
    2         0   66   14    0    80
    3         0    0    0   36    36
    All      90   80   35   36   241

    Cell Contents: Count
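    For anyone working outside Minitab, here is a minimal Python sketch (the data frame below is a stand-in reconstructed from the counts above) showing the same cross-tabulation; the empty cells are factor combinations that were never run, which is what makes the model matrix rank deficient.
```python
import pandas as pd

# Stand-in data reconstructed from the counts in the table above.
df = pd.DataFrame({
    "year": [1] * 125 + [2] * 80 + [3] * 36,
    "gear": [1] * 90 + [2] * 14 + [3] * 21   # year 1
          + [2] * 66 + [3] * 14              # year 2
          + [4] * 36,                        # year 3
})
# Empty cells mean the two predictors are confounded (can't separate their effects).
print(pd.crosstab(df["year"], df["gear"], margins=True))
```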

    0
    #116500

    Jamie
    Participant

    Terri,
    Our company offers a Six Sigma for Sales training which seems like it addresses the types of points you’re getting to.  It gives an overview of the methodology so that people understand the language.  Then there is a strong focus on the Define phase — mostly VOC and FFU (we have our own Customer Fitness For Use Linkage Model which comes from HOQ/QFD).  We present the tools that can be used that many have mentioned like SPIN, Kano Model, etc.  The Measure phase tools are also taught on a basic level so that they can understand how and where to search and dig for the right kinds of data.  After that, the remaining phases are summed up and they’re instructed to go to their kind GB/BB/MBB for support in any project that might arise.
    I hope this helps some.
    Jamie

    0
    #110844

    Jamie
    Participant

    Thanks a lot for the quick response.  First of all, my project is basically to reduce the amount of unproductive time (time that is not billable to the customer, excluding vacation, sick time, holidays and training).  Examples of unproductive time include sitting idle, doing paperwork, cleaning out the service vans, etc.
    Because all types of unproductive time are counted in our JD Edwards system as one code, I had to take a sample of the 36 technicians’ time sheets.  I decided to do 8 weeks’ worth of time sheets for each technician, giving me 288 points in my sample.  The values range from 0 hours per week to 16 hours per week.  Right now the process is not monitored, so we are shooting for an LSL of 0 hours per week and 2.5 for the USL. By the way, I tried calculating this by day per tech instead of by week, but most of the data points were concentrated at the 0 or 1 mark, which did not give a very good graphical picture in Minitab. 
    Anyway, most of my data is outside of spec and the histogram is positively skewed, with the ZBench being a negative value.  I have read all my books and searched the net for an answer on how to calculate this but have not been able to do so yet, so I am getting quite frustrated.  My Black Belt mentor was away, so I was unable to seek his advice. Does this paint a clearer picture?  Thanks in advance. 
     

    0
    #103049

    Jamie
    Participant

    To answer the question as simply as possible… “How does a simple t-test for two groups with equal variances equate to the F test?”
    They are equivalent to each other. Try a sample dataset and you will see that you get the exact same p-value. The difference is that ANOVA will allow you to test differences among 2 or more group means, whereas the 2-sample t-test is limited to only 2 groups.
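    A quick check in Python (made-up numbers, just to illustrate the equivalence):
```python
from scipy import stats

# Two made-up groups: the pooled-variance t-test and one-way ANOVA (F-test)
# return identical p-values, and F equals t squared.
a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8]
b = [5.6, 5.3, 5.7, 5.5, 5.9, 5.4]

t, p_t = stats.ttest_ind(a, b, equal_var=True)
f, p_f = stats.f_oneway(a, b)

print(p_t, p_f)     # same p-value
print(t ** 2, f)    # F = t^2
```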
    Jamie
     

    0
    #101198

    Jamie
    Participant

    Thanks a ton!

    0
    #101148

    Jamie
    Participant

    I talked to my BB about what we discussed. He said not to concern myself with the formal reporting right now, to get to my end result first and then talk about the reports. But I would like to get a jump on it.
    Thanks a bunch my email is:
    [email protected]

    0
    #101140

    Jamie
    Participant

    That makes sense, I am finishing up my analysis and am going to start recommendations. So a status report seems to be the ticket for now.
    I will be expected to have a Control Report by the end of the summer as well. If you have examples of both, that would be great. I agree that I should be instructed on these matters; I am an engineering student who took statistical process control classes just to learn the material, so I am not expected to know any of it. Many here at my company were excited to hear that an engineering intern had this knowledge. I am teaching myself the rest.

    0
    #101134

    Jamie
    Participant

    I’m pretty new to this so I’m not sure… I have taken a few classes and they covered all the charts and analysis, but never how to report your results. An explanation of the difference would be greatly appreciated.

    0
    #95161

    Jamie
    Participant

    Since you are on a Six Sigma website I’m going to take a stab and assume you want to know what GB means with respect to Six Sigma. GB stands for Green Belt, which is a level of certification for a process improvement technique using a 5-phase approach (Define, Measure, Analyze, Improve, and Control). For Green Belts this typically means 2 to 3 weeks of in-class training while candidates work a project outside of class. The purpose is to improve something in the area they work in. They must demonstrate results and usually pass several exams. The certification is not standardized, so what I describe should be viewed as typical. Many companies self-certify, and most of the consulting companies listed on this site also offer Green Belt certification.
    Jamie

    0
    #95159

    Jamie
    Participant

    Appraiser #1 matched 8 of 10:

    Appraiser   # Inspected   # Matched   Percent   95% CI
    1           10            8           80.0      (44.4, 97.5)
    Therefore the observed agreement is 80%. The best guess for appraiser #1’s true agreement is 80%, but… and this is a big but, this is only one sample. If you went and did the study again you might get a different answer. So what this is saying is that you are 95% confident that the TRUE percent agreement for appraiser #1 is between 44.4 and 97.5. In other words, appraiser #1 might be as bad as 44.4% or as good as 97.5%. Taking a larger sample will make you more confident and in turn make the confidence interval shrink.
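    If you want to reproduce that interval yourself, here is a short Python sketch of the exact (Clopper-Pearson) binomial interval, which appears to be what produces the (44.4, 97.5) above:
```python
from scipy.stats import beta

# Exact (Clopper-Pearson) binomial interval for 8 matches out of 10.
k, n, alpha = 8, 10, 0.05
lower = beta.ppf(alpha / 2, k, n - k + 1)
upper = beta.ppf(1 - alpha / 2, k + 1, n - k)
print(k / n, (lower, upper))   # 0.80 and roughly (0.444, 0.975)
```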
     

    0
    #93666

    Jamie
    Participant

    The same assumptions for normality apply for a paired t-test as they do for a one-sample t-test. A paired t-test is a one-sample t-test where the average difference of the pairs is tested against 0. The null hypothesis is that the average difference between the pairs = 0; the alternative is that the average difference ≠ 0.
    Therefore it is really the normality of the differences that you are interested in. Take each pair, subtract sample 1 from sample 2, and plot the differences. If the differences are reasonably normal you are OK using a paired t-test. If they are not normal, my recommendation is to use a one-sample nonparametric test on the differences where the null hypothesis is that the median difference = 0.
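    A small Python sketch (made-up paired data) showing the equivalence, plus the nonparametric fallback:
```python
import numpy as np
from scipy import stats

# Made-up paired data: a paired t-test equals a one-sample t-test
# on the pair differences tested against 0.
before = np.array([12.1, 11.8, 13.0, 12.5, 12.9, 11.6])
after = np.array([11.7, 11.9, 12.4, 12.1, 12.3, 11.5])

print(stats.ttest_rel(before, after))           # paired t-test
print(stats.ttest_1samp(before - after, 0.0))   # same statistic and p-value

# Nonparametric fallback on the differences (tests median difference = 0)
print(stats.wilcoxon(before - after))
```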
    Anyone have different thoughts?
    Jamie
     

    0
    #92998

    Jamie
    Participant

    What is the practical question you are trying to answer? Is it “is the percent of calls resolved different for the two call centers?” If so, a 2-proportion test should work fine.
    If this is your question… the null hypothesis is that the two proportions are equal. To perform the test in Minitab, simply count the total surveys from center 1 and center 2. Then count the number who replied positive (resolved) for center 1 and center 2. Then go to Stat -> Basic Statistics -> 2 Proportions… summarized data. Enter the number of surveys into Trials and the resolved counts into Successes. A small p-value (i.e. .05 or smaller) in the results indicates a high probability that the difference is real, not just due to chance.
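    Outside Minitab, the same test can be run in Python; a sketch with made-up survey counts:
```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up counts: 140 of 200 resolved at center 1, 118 of 190 at center 2.
# H0: the two resolution proportions are equal.
resolved = [140, 118]
surveys = [200, 190]
stat, p_value = proportions_ztest(count=resolved, nobs=surveys)
print(stat, p_value)   # a small p-value (<= .05) suggests a real difference
```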
    If your question is different, let me know.
    Jamie

    0
    #92896

    Jamie
    Participant

    Square the entire thing and it makes more sense… 1.41 is just the square root of 2. I did the derivation once and finally realized that the 2 simply represents each side of the normal curve. It’s hard to explain in words, but what you are doing is coming up with how many distinct “buckets” or categories your gage can identify compared to the part-to-part variation. The ratio of variances is used to do this. But this ratio only describes one half of the “buckets”, so you must double it (hence the 2). When you take the square root of the doubled ratio of variances you get 1.41 * Sp/Sm.
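    In code form (hypothetical Sp and Sm values), the arithmetic looks like this:
```python
import math

# Hypothetical part-to-part (Sp) and measurement (Sm) standard deviations.
Sp, Sm = 3.0, 0.9

ndc = math.sqrt(2 * (Sp ** 2 / Sm ** 2))   # double the variance ratio, take the root
print(ndc, 1.41 * Sp / Sm)                 # same number; 1.41 is just sqrt(2)
```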
    Hope this helps, Jamie

    0
    #89084

    Jamie
    Participant

    TAT? Not sure what that is, but it looks like you have monthly data. Many times monthly data really isn’t monthly data, it’s just how it’s tracked. See if you can get a more granular dataset. In my world that means I take someone’s monthly production figures (14 data points) and look for the data that made up the 14. Look for weekly, daily, shift, product runs, etc. If you can’t get historical details, use opinion tools like the XY matrix, brainstorming, fishbone, etc. to gather a team’s opinion on what details you need to measure, and implement a data collection scheme to gather it (I know that’s really easy to say and a lot harder to do).
    Hope this helps, Jamie
     

    0
    #87000

    Jamie
    Participant

    I did a little research and here is what I was remembering….
    For a stratified (grouped) simple random sample, the standard deviation of the estimator for the population mean is the formula I supplied (assuming proportional allocation). Now this is the variance of the mean (BillyJoeJimBob, the std err of the mean of grouped samples), not of the individuals, so I do not believe the formula I provided is the correct application (though it does appear to be the correct formula).
    But, in thinking further I’m led to question the use of the pooled std deviation for the same reasons. The definition of pooled std dev from this website is….
    Pooled standard deviation is the standard deviation remaining after removing the effect of special cause variation-such as geographic location or time of year. It is the average variation of your subgroups.
    I take this to mean that the pooled std dev is the “average” within-group variance (note average is in quotes). The original poster asked for an estimate of the population std deviation. But Total = Within + Between, and pooled (if I’m reading the definition correctly) only estimates the within.
    If the pooled std deviation estimates only the within-group variation, then it would underestimate the true population std deviation. One might argue this is not true if the groups were truly random samples, each from the entire population, but since they are grouped I’m led to believe it’s for some reason (probably time based, i.e. monthly reports). Therefore they aren’t random samples.
    If this is true then one would need to add back in the variation found between the group means and the overall mean. I’ve got what I think it is in my head, but don’t have the time to look it up so I’ll leave it for now.
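    A quick numerical sketch (hypothetical groups) of the Total = Within + Between idea, which is what would need to be added back in:
```python
import numpy as np

# Hypothetical grouped data: total sum of squares = within + between.
groups = [np.array([10.1, 10.4, 9.8, 10.2]),
          np.array([11.0, 11.3, 10.8, 11.1]),
          np.array([9.2, 9.5, 9.1, 9.4])]

all_data = np.concatenate(groups)
ss_total = np.sum((all_data - all_data.mean()) ** 2)
ss_within = sum(np.sum((g - g.mean()) ** 2) for g in groups)
ss_between = sum(len(g) * (g.mean() - all_data.mean()) ** 2 for g in groups)

print(ss_total, ss_within + ss_between)   # these two match
```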
    To be honest, I kind of hope I’m wrong and it’s as simple as just using the pooled std deviation.
    Jamie
     

    0
    #86959

    Jamie
    Participant

    Fin, Why is your data not normal?
    Skewed? Multi-modes? Granularity? or Kurtosis?
    Box-Cox only works with skewed data. There are a few reasons why Box-Cox will fail (e.g. a small max-to-min value ratio), but most of the time I see it fail it is because someone used it on a data set that shouldn’t be transformed in the first place.
    Jamie
     

    0
    #86955

    Jamie
    Participant

    Zilgo, Are you saying these 2 equations are equal?
    VAR*N=VAR1*N1+VAR2*N2+…+VARk*Nk
    VAR*N^2=VAR1*N1^2+VAR2*N2^2+…VARk*Nk^2
    A simple yes or no will work fine.
    Jamie

    0
    #86937

    Jamie
    Participant

    Gabriel, I think the formula for weighted variance is actually….
    VAR*N^2=VAR1*N1^2+VAR2*N2^2+…VARk*Nk^2
    or…
    VAR=(VAR1*N1^2+VAR2*N2^2+…VARk*Nk^2)/N^2
    It’s weighted by the subgroup size squared.
    Can someone else confirm this?
    Thanks,
    Jamie
     

    0
    #86757

    Jamie
    Participant

    Denis, It wasn’t until I read the following explanation that it made sense as to what the calculators (or Sigma Level Tables) were doing….
    “The calculator assumes you are inputting long-term DPMO and want to calculate short-term Sigma Level.”
    If you truly have short-term data (i.e. data from a very narrow inference space: same shift, batch of raw materials, etc.) then you can look the Sigma Level up directly in a Z-table, or use the calculators but subtract 1.5 (because they added 1.5 assuming you had long-term data). All the calculators do is look up a Z-score and add 1.5.
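    For illustration, a minimal Python sketch of what the calculators are doing (the DPMO value is made up):
```python
from scipy.stats import norm

# Treat DPMO as a long-term tail probability, look up Z, then add the 1.5 shift.
dpmo = 6210.0                        # hypothetical long-term defect rate
z_long_term = norm.isf(dpmo / 1e6)   # Z for that upper-tail area (~2.5 here)
sigma_level = z_long_term + 1.5      # reported short-term sigma level (~4.0)
print(z_long_term, sigma_level)
```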
    Jamie

    0
    #86658

    Jamie
    Participant

    Xin, You are after a few things… First, what is the percent of time the operators agreed with themselves? So, assuming you had each operator inspect the same part 2 different times, what is the percent of times that they recorded the same answer for a given part (that is, agreed with themselves)? Now take those responses where each operator agreed with him/herself and calculate the percent where all the operators agreed on the same answer. Now, where all the operators agreed with themselves and with each other, what is the percent where they agreed with a standard (assuming you have a standard)? This is the ultimate number you are after. Practically, it’s the percent of the time when all the measurements were the same (% total agreement = operator agrees with self + all operators agree with each other + all operators agree with standard).
    Minitab (at least the current version) can do all of these calcs for you, or you can do them within Excel.
    Hope this helps,
    Jamie
     
     

    0
    #85809

    Jamie
    Participant

    A nested design is appropriate, but the real “trick” in destructive testing is to get samples that are homogeneous enough that they can be treated as the same unit. For example, take a chemical batch and split it into 6 different samples (3 operators x 2 repeats). If your gage study called for measuring 10 different parts, then you would get 10 samples where each of the ten was actually made up of 6 similar items. If the within-group variation is very close to zero then the gage study will work fine. If you get a good gage using this method then you are OK; the problem arises when you get a bad gage, because you won’t truly know if the gage is bad from within-group variation or gage error.
    Jamie

    0
    #84951

    Jamie
    Participant

    Hemanth, Calculating Sigma Level is much easier, and yet much more difficult, than training material tends to imply. Let me try to elaborate. The method used in Sigma Level calculators or lookup charts assumes the following…
    1. You have collected long term data and wish to report short term sigma (the default).
    2. You are able to estimate a percent defective. Assuming normally distributed data, it’s simply the tail area beyond the nearest specification limit.
    3. Using the percent defective (finding this in the body of the table) you look up the Z score (the number of standard deviations to a spec limit) in a standard normal table. This method assumes normality, but can be applied to nonnormal data as well, as long as you have a way to estimate percent defective.
    4. Now you have long term sigma level. To estimate short term sigma level, the generally applied practice is to add 1.5 to this. This is essentially a correction factor which acts to estimate short term sigma level for a centered set of data with a std dev from a narrow inference space (short term population).
    It’s really just that simple: collect long term data, estimate % defective, look up this value in a standard normal table and add 1.5. Now there are methods to actually collect short term samples (collectively forming long term data) and use the groups of short term data to calculate a better estimate for short term sigma level (instead of just assuming +1.5). Also there are methods for transforming data that is continuous but not normal. This is the “much harder” part I was describing. But the general case is very simple.
    To address your post, I believe what the material is referring to by shifting is the method of actually using groups of short term data to estimate a short term std deviation (this is the within group std deviation), then using the long term data to determine the true center for the population. You refer to this as a target, but I do not believe this would be the target for your project; rather it is the overall mean for the population, i.e. where the overall process is currently centered. Now if you are talking “potential” capability, that is different, but I’m going to keep this discussion to “how do I calculate the short term Sigma Level for an existing process”. 
    Now, can you use a single sample of short term data to estimate sigma level? Yes, but I generally recommend that this is not done unless you have nothing else. The reason is that the short term data is a very limited sample that can be, and most likely is, a very poor estimate. It’s just not enough information to tell you what you want to know (generally speaking).
    Hope this helps,
    Jamie
     

    0
    #84582

    Jamie
    Participant

    Curious George, Let me answer what I think you are asking… when comparing 2 means, the t-test (with pooled std dev) is equivalent to an F-test. To demonstrate this, try running an example using both; you will see that the p-values are identical.
    The difference, though, is that ANOVA can be used to test differences when there are more than 2 groups. See the previous post for the disadvantages of doing multiple t-tests to check for differences when the number of groups > 2.
    Jamie

    0
    #84170

    Jamie
    Participant

    The standard deviation for the overall should be the same if you are running a capability analysis (normal) and doing the transformation within the options window.  Make sure you are looking at the correct standard deviation.

    0
    #84111

    Jamie
    Participant

    Ben, Glad I could help, sounds like you have what you need. One last passing comment… minitab can calculate P/T for you. Under options there is a place to put in the process tolerance. If you do that it will include P/T in the analytical output.
    Best of luck, Jamie

    0
    #84097

    Jamie
    Participant

    Andrea, It sounds like you’ve been given a business metric and not a process metric to work on… the problem is that business metrics really aren’t at an actionable level. Unless you’ve got a mature deployment and a knowledgeable Champion, this is very common.
    Is it just the price that a vendor charges you that you are working on? I’m unclear as to how one is supposed to improve this except to find alternative vendors, buy in bulk, or use consignment buying (but consignment won’t reduce the price, just improve cash flow), etc. Now, yes, you could approach the project this way and might be able to make a difference, but it will make for a rough training project. You need to ask yourself what makes up the cost of poor quality associated with spending for raw materials (I’m calling your parts raw materials). Is it that you have a high scrap rate for some parts so you buy twice as many as you need? Is it that you have tremendous inventories? Do you have poor cycle times, either in assembly or in delivery of parts, etc.? Once you get it down to an actionable level, make this your primary metric.
    Getting projects to an actionable level is really the Champion’s role (as I see it) and it’s crucial for training projects, but again many Champions just aren’t really up to the task, so you get stuck with “improve my profits” types of projects. Consider the extreme… “Dear Black Belt trainee, your project will be to improve 4th quarter profit by 10%.” When you get projects like this the candidates are left scratching their heads… how do I do a gage study on profit, what’s my capability… I’m confused.
    Maybe I just haven’t had my cup of coffee yet and don’t understand the problem. So could someone else give an opinion?
    Jamie

    0
    #84096

    Jamie
    Participant

    Ben, Sorry not to be more clear. What I meant by P/T is the precision-to-tolerance ratio. For your example I believe your products should be +-5/16 of an inch. If this is true then the tolerance is 10/16″… this is the range of the spec limits. The precision is generally 5.15*S(gage). The 5.15 corresponds to the number of standard deviations that cover 99% of the variation of the gage (not sure I said that exactly correctly, but that’s close)… this is the same as +-2.575 std devs. So to get P/T just take 5.15*S(gage)/(tolerance). If you have a gage with S(gage) = 1/16″ then P/T would be ((1/16)*5.15)/(10/16). This would mean your precision to tolerance is 51.5% (not too good). Now be careful here too, because the standard of 5.15 can be changed to whatever you want to use. So P/T tells you how good the gage is relative to the range of the spec limits.
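    The same arithmetic in a short Python sketch (using the numbers above):
```python
# Precision-to-tolerance with the example numbers: spec is +/- 5/16", S(gage) = 1/16".
s_gage = 1 / 16
tolerance = 10 / 16              # USL - LSL
p_to_t = 5.15 * s_gage / tolerance
print(p_to_t)                    # 0.515, i.e. 51.5% (not too good)
```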
    Hope this helps, Jamie

    0
    #84093

    Jamie
    Participant

    First ask yourself what you want to know… then determine the test you should use to answer that question. You need to form the question as a hypothesis. 
    The two tests you are asking about apply to testing differences between 2 groups. So, do you want to know if the means of the two groups are different, or if the variances of the two groups are different? If it’s the means of continuous, normally distributed data, you should use the 2-sample t-test. You might first test whether these 2 groups have different variances, because you can use the pooled std dev for the groups in the 2-sample t-test to make the test more powerful.
    If your data is in the form of percents… for example, if I want to test the percent of defectives for shift 1 vs shift 2, then I would use a 2-proportions test.
    It all depends on what you want to know and what form your data is in.
    Jamie

    0
    #84090

    Jamie
    Participant

    Leon, Gage studies can hardly be boiled down to one number. The true complexities cannot easily be covered in one post, but let me give you a few thoughts. First I must ask, what exactly is the 10% you report (always best to be clear)? Is it percent contribution, percent study, or precision/tolerance? All three are reported as percents and will typically yield very different values, so make sure you are clear on what is being reported.
    Now, once you know this, you can interpret it within the context of the gage study. Yes, the percent contribution and percent study apply only to the range (really the variation) you tested in the gage study itself. If the range of values (really the variation) for the gage study is different from the process, then you cannot apply this percent directly back to the process. So if you have chosen to measure parts that vary significantly more than the process, this number will appear much better than it really is; the converse is also true. A common mistake is to measure different products and include them in the study; this causes the part variation to be huge and conversely makes these percents look very small.
    Two good numbers to look at are the precision (5.15 * S(gage)) and precision/tolerance. These values are not influenced by the range of parts measured. These often tell me more than any of the previous percents that were described.
    Hope this helps, Jamie

    0
    #84089

    Jamie
    Participant

    I remember examples in a book about a builder in Texas (I think) who used Six Sigma with very good results. I can’t remember which book it was in, though (possibly The Six Sigma Revolution by Eckes, but that’s just a guess and I don’t have the book here to check). Does this ring a bell with anyone?
    Jamie

    0
    #84088

    Jamie
    Participant

    Glad I could help… OK, what I meant by transforming the spec limits… Let’s say my data is the time it takes to process an order. This data is bounded by 0; we have a lot of orders that are processed quickly, but some orders take more time and there are a few orders that take an extraordinary amount of time. Let’s say this creates a distribution that has a peak near zero but a long tail to the right. I find that taking the square root of this data pulls the tail in so that the new data set passes normality tests (while the untransformed data does not). If I have a spec limit of 25 minutes (customers are just not happy if processing takes more than 25 minutes), then before I can use the transformed data for capability, I need to transform the spec limit. So I take the square root of 25, which is 5. I can now use the transformed data for capability, but I enter a spec limit of 5 (not 25): since I took the square root of all the data, I must also take the square root of the spec limit.
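    A tiny Python sketch of that idea (hypothetical processing times):
```python
import numpy as np

# Hypothetical right-skewed processing times (minutes) and a 25-minute spec limit.
times = np.array([3.0, 5.0, 7.0, 9.0, 12.0, 18.0, 24.0, 40.0])
usl = 25.0

times_t = np.sqrt(times)   # transformed data
usl_t = np.sqrt(usl)       # the spec limit gets the same transform: 5, not 25
print(times_t.round(2), usl_t)
```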
    The following might be confusing, but I’ll give it a try…
    Minitab has an option to do all this for you if you know the lambda. In capability, under options (I think) you can specify a lambda. Now you use the untransformed data and minitab will transform it for you and it will transform the spec limits as well (using the lambda you told it to use). The other nice thing is that if you use this method it will show you a small histogram of the untransformed data in the upper left hand corner. I prefer the output from this method, largely because it makes it very clear that one is looking at transformed data.
    Best of luck, Jamie

    0
    #84066

    Jamie
    Participant

    Using a nested design is appropriate, but the real question I think you are asking is: how do I take multiple observations on a sample which is destroyed? Seems impossible, and it is. But we can get close by selecting samples produced very close in time under the very same conditions, so that you can assume these pieces are really the same (this is the key). We then consider any variation (between these closely selected parts) to be attributable to the measurement system. Sometimes this works very well, sometimes not. Consider a chemical batch: I could take one sample and subdivide it into 6 samples (to complete my gage study I might take 10 of these batches across the range of process variation, subdivided into 6 subsamples each). We treat each group of 6 as if they are really the same sample. I now have enough subsamples for 3 operators to do 2 tests each. Any variation is assumed to be either repeatability or reproducibility. The nested design drops out the operator*part interaction since you truly aren’t retesting the same part. How well does this work? It all depends on how well you can select subsamples that really don’t vary. The good news is that true part variation within your subdivided samples will make your gage look worse. This is good because it is a worst-case scenario, so if you can get an acceptable gage using this destructive approach, chances are your gage is really a little bit better and you are good to use it.
    Hope this helps, Jamie

    0
    #84065

    Jamie
    Participant

    Before blindly applying this tool, we need to consider some things….
    First you need to understand why your data is not normal and what you want to do with your data. I think you’ve answered what you want to do with your data in the title: you want to estimate the percentage of defects beyond a spec limit and use this estimate to determine a Z score or Sigma level.
    Next, I assume your data is continuous; if you are dealing with discrete data a transform is not appropriate. So if your data is continuous, is it smooth (meaning no large humps)? Also, is your data skewed?… meaning it has a “tail” pulled in one direction. Before going further please answer these questions.
    If you have answered yes to these questions, a transform may work well to estimate the “tail” area beyond a spec limit.
    So what a transform does is raise all of your data to “some” power. An example would be to take the square root (i.e. raise each observation to the power of 1/2). So what does this do and why do it?… Well, what this does is “pull” your data towards 1 (I use the word “pull” for lack of a better term). The further the data is away… i.e. the points in the tail… the more it “pulls”. Consider this: the square root of 1 is 1, meaning no pulling. The square root of 4 is 2; the number is reduced by 50%. The square root of 100 (a point in the tail) is now 10, reduced by 90%. If you have correctly identified the reason for nonnormality, the resulting dataset is often normal and you are free to use normal tools on it. Warning: if you transform your data, you also need to transform your spec limits before performing capability analysis.
    So how do I know what power I should raise my data to? Certain shapes lend themselves to certain powers, so an experienced analyst can often use simple rules and get a meaningful transformation. But there is also a method called the Box-cox transform that will “search” for the best lambda (lambda is the power).
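    A minimal Python sketch of the Box-Cox search (the data here is simulated, just to show the mechanics; note the spec limit gets the same lambda):
```python
import numpy as np
from scipy import stats, special

# Simulated smooth, right-skewed, positive data (bounded by 0) and a spec limit.
rng = np.random.default_rng(7)
data = rng.lognormal(mean=1.0, sigma=0.6, size=200)
usl = 15.0

transformed, lam = stats.boxcox(data)   # searches for the "best" lambda
usl_t = special.boxcox(usl, lam)        # transform the spec limit with the same lambda

print(lam, usl_t)
print(stats.anderson(transformed, dist='norm').statistic)  # re-check normality
```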
    Be warned there are a number of pitfalls and easy places to make mistakes. If you haven’t had formal training in working with nonnormal data or don’t have a local expert I’d approach it extremely cautiously. I would not go try it if all the info you have is what is in my post.
    Also, I can’t say this enough: if one only knows that the data is not normal, blindly applying a Box-Cox transform is not appropriate. This is probably the number 1 mistake I see people make with nonnormal data. I can think of 5 reasons for nonnormality, and Box-Cox only works with 1 of the 5.
    Hope this helps, Jamie

    0
    #84030

    Jamie
    Participant

    This is interesting. I’ve tended to find myself using the words “continuous improvement” in place of many of the quality-type objective words you are presenting. I’ve not known why I did this, but it does seem to fit the American culture better.
    I ask why “continuous improvement” is important… it’s because if we don’t jump on this opportunity to improve, then the competitor down the street will (and probably is). When put in the context of a competition for business, it seems that Americans can easily buy into this. It’s in our nature to compete.
    When discussing perfection, the response I see most is “what’s that going to cost… whatever it is, I can tell you it’s not worth it; no customer will pay for perfection, so why try!” Striving for perfection doesn’t make sense, but competing for business through improvement sure does. Or at least that’s been my observation.
    Jamie

    0
    #84028

    Jamie
    Participant

    Let me make a guess: “it needs to be concise but eye-opening and convincing” means you have to make your case during a senior staff management meeting in 15 to 30 minutes. If this is true, maybe what you should do is make your case for a pilot program. What this does is reduce the senior managers’ risk; they probably think in terms of $. It won’t cost much to implement a pilot program in one department or in one small area. Use the results from the pilot program to make your case for implementing an organization-wide SPC program. The actual numbers from your organization will mean a lot more to them than examples from other industries.
    The other idea is to use a demo of some sort. Is there a way you can use some product that is specifically set up in an order where there has been a change? Let them take samples, you create the plots as they go, and have them tell you when a change happened. Maybe as simple as the number of grey vs white marbles in a series of jars. Don’t let them see what’s in the jars, just let them reach in and grab a sample. They could collect samples and use a P chart to see if the ratio changes. Now have them contrast this vs 100% inspection of the marbles. Which one of these methods costs less? Warning: I haven’t done this, so you might want to try it before using it.
    Jamie

    0
    #84018

    Jamie
    Participant

    It looks like the joint probability of getting 2 parts from process A and B that are both good is 76% * 94% = 71.44%.
    Then the probability that two pieces will be glued together correctly and are both good from A & B is 71.44% * 95% = 67.9%. If this is true then it’s not really any different; it’s just 76% * 94% * 95%.
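    The same rolled-up yield as a quick check in Python:
```python
# Rolled-up (joint) yield of the glued assembly.
p_good_a = 0.76
p_good_b = 0.94
p_glue_ok = 0.95
print(p_good_a * p_good_b * p_glue_ok)   # about 0.679, i.e. 67.9%
```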
    Can anyone confirm this?
    Jamie 

    0
    #84017

    Jamie
    Participant

    Be warned that transformations only work for skewed, smooth, continuous distributions. There are many other reasons for nonnormality, and doing transforms on them is not useful. Before attempting to transform data you need to understand the reason your data is not normal. Data like time between calls may indeed fall into this category, but it may not. I say it may because it has a natural boundary of 0, which can cause skewness.
    One commonly accepted practice is to simply take the percent of calls that went beyond a spec limit and look this value up in a standard normal table (add 1.5 if appropriate). Be warned that this method requires much more data to get a reasonable estimate. Does time between calls have a spec limit? Is there a point where the time is too long?
    Jamie

    0
    #84016

    Jamie
    Participant

    Charles, Damn, I had a long reply typed and I hit the back key and lost the entire thing… oh well, maybe that’s best. So a quick summary… apparently this topic isn’t new; the author states “Many statisticians consider a control chart to simply be a sequential test of hypothesis procedure with an unbiased uncontrolled type I error rate.” Now he does warn against taking this simplistic view, but at least it seems there are others (statisticians) that take the hypothesis analogy a step further by saying it is a true test. I actually feel good that I’ve got company.
    I did find the discussion on assumptions of normality, mean, and std deviation to be disturbing. From a quick read (and maybe that’s the problem), the author states these are never known. We only have estimates, so the exact probabilities aren’t known either, and so applying them is not appropriate (I might be misparaphrasing). If one accepts this to be true, then it would tend to invalidate the use of inferential statistics in process analysis. Yes, we violate certain assumptions to apply inferential statistics in process analysis, but we knowingly do this accepting the risk; it doesn’t make it useless. Frankly, I have never seen an application of statistics where it was certain that the base assumptions were not violated.
    At the least this has been very interesting, but the bottom line is I’m not sure how changing this small difference in interpretation will change my application of the tools, so it’s probably best to leave things as is.
    Regardless, I truly appreciate your research and posting.
    Thanks, Jamie
     

    0
    #83993

    Jamie
    Participant

    Sarah, Thanks for the info. IT and Six Sigma go hand in hand. I think there is a strong link: IT manages the data while Six Sigma uses it. I employ a simple model: Data -> Information -> Decisions -> $. If you become employed by a company that uses Six Sigma, your research will go a long way in helping you understand what is important with respect to IT.
    If you are just trying to figure out “what is this Six Sigma stuff?” I recommend 2 very quick reads (an hour or two)… The Power of Six Sigma by Chowdhury and Leaning into Six Sigma by Wheat, Mills, & Carnell. The first is service related while the second is manufacturing. They won’t tell you everything (that’s not their purpose), but they do give you a decent idea of what Six Sigma is about. They are also written as stories, so they are a lot more fun to read than a textbook.
    Best of luck, Jamie

    0
    #83989

    Jamie
    Participant

    Sarah, I think it’s a little confusing since the link is Methodologies, but once you click on the link you will notice that the page is actually titled Six Sigma and Quality Methodologies. Some of the links refer directly to Six Sigma methodologies while others are more general, referring to quality methodologies or initiatives. If you explore the “Six Sigma” link under Methodologies I think you will find what you need… in particular, try this link.
    https://www.isixsigma.com/library/content/six-sigma-newbie.asp
    What level of education are you at? I’m curious about the level at which students are researching Six Sigma.
    Jamie

    0
    #83983

    Jamie
    Participant

    Tom, I’ll give it a shot… I believe Marc is warning you about the distinction between soft and hard tools. Both can be very important when used correctly. You will notice that the Six Sigma approach provides both types, but in a specific order. The soft tools are applied early on during the project. The soft tools are designed to give a team a structured manner through which they can collect experience, knowledge, and thoughts about the way a process works. Once we identify a problem area to work on, we often don’t know where next to gather data. The soft tools give you a likely path (but it’s only a likely path). To do this a team might use an XY matrix (cause and effect matrix) to gather opinions about what causes the primary metric to change. This one tool can gather much of what a team “thinks” makes a difference.
    Is it time to go make improvements based on these opinions?… Generally no (this is what Marc is warning you about). We then use this list of potentially important factors to begin to collect data and analyze. These actual analyses using real data are known as the hard tools. It’s only through the hard tools that we can say with a specific certainty that the factors do or do not make a difference (I’m talking about hypothesis testing). It’s the results from the hard tools that we use to decide what we will change to achieve the desired results.
    You may need to understand how the different factors interact with each other before you can truly understand what levels these different factors need to be at; if this is the case you would use a DOE to determine this. It’s a specific progression from soft to hard tools. It’s important to understand what you gain from each and when to apply them.
    A funny example: an operator was actually certain that the place he parked in determined how we would run that day. He needed his “lucky” spot. He would rank this high in importance. The soft tools might show this as important; it doesn’t mean it is. If we used only soft tools, then a team might decide they can make an impact by giving him a dedicated spot. Now this might not cost a lot in time and money, but will it really make a difference? Only data and hard tools can prove or disprove these types of opinions.
    Hope this helps,  Jamie

    0
    #83981

    Jamie
    Participant

    Charles, Thanks for the post… it actually allows me to leave this long thread feeling pretty good. I’m off to learn more. Thanks to all for the great discussion.
    Jamie

    0
    #83979

    Jamie
    Participant

    Ben, One of our business units produces lumber, so I’m used to seeing lengths, which is why it jumped out that you were probably dealing with separate products. One question I have (to be certain as to what we are looking at): the percents you provide, are they P/T, % contribution, or % study? Also consider how capable the supplier is. For example, if he were close to Six Sigma (I’m guessing he is not) then you only need a measurement system that picks up gross changes, since you are so far from the specification limit (again, I’m guessing this is not your case). From the numbers I see, I’d say it’s marginal.
    You now have other things to consider. I’d ask for the actual gage study. It’s very easy to do a gage study that makes the gage look very good when indeed it is not. For example, what is the range of parts that were measured? If the gage study was done on a wide range then the percent variation would go down (this is not true for P/T; there the range is identified by the tolerance, which is fixed). Also, a coarse measurement system can make a gage look artificially good. For example, ask operators/inspectors to measure to the nearest 1/4″. You can see that they could probably all do this very well. I’d say all operators could repeat this measure and reproduce each other. For any part, if they measure it exactly the same then your % would be 0. But this doesn’t mean the gage is good. Again, measuring to the nearest 1/16″ appears marginal to me. Also, if he reports % contribution but you think you are looking at % study, then you might draw the wrong conclusion. Just some things to consider.
    Jamie

    0
    #83972

    Jamie
    Participant

    I agree with what Gabriel has said here (I need to find a post of yours I disagree with, that would be more fun). To restate: all measurements will be discrete because of the finiteness of the measurement system; this does not make the variable you are measuring discrete. You are still measuring a continuous variable.
    But let me question this… does the product truly vary from 80 to 150 in length? Or do you really receive different products that vary from 80 to 150, where for a particular length you have a tolerance that is much smaller? For example, do you get products that are 80, 90, 100…150 units in length, you are trying to determine the capability of these, and the specification of each product is +-0.01 unit? If so, then apply Gabriel’s information with respect to tolerance to help understand whether the gage resolution is acceptable. I hope I didn’t digress; I’m not positive of your original question.
    Jamie

    0
    #83971

    Jamie
    Participant

    Gabriel, I enjoy your thoughts, thanks for all the posts. Sorry for the misspelling… and for the books, it’s Jamie (not Jaime :)).
    Jamie

    0
    #83944

    Jamie
    Participant

    Stan, I’m assuming you are referring to the Type I and Type II error that was posted by Charles. If so (in non-statistical terms)… Type I: we conclude a difference, i.e. we say something has changed when indeed it has not. Type II: we conclude there is no difference, i.e. nothing has changed when indeed it did.
    Question: I’m assuming we have been talking about the single test (or OOC condition) where we find a mean outside of 3 std deviations. Is it possible that the language is very specific, where it does not refer to Type I or Type II error, because this is just one of many tests (there are, what, 8 common ones?), many of which look at time-dependent relationships? If we collectively look at the 8 tests, then I’d say we cannot make the same analogies about control chart tests and hypothesis testing or t-tests that have been discussed. I’m applying my thoughts to only the one most standard test.
    Jamie

    0
    #83938

    Jamie
    Participant

    Charles, I wanted to thank you for your references. To be fair, I need to really research “the experts”. I must say, though, your concluding I’m wrong almost seems to further my point (but I doubt this will add much to convincing you). Sure, there is always a chance of being wrong when you accept the alternative hypothesis. That chance is called alpha risk, the risk we are willing to accept. If you don’t conclude there is a high probability something has changed, then why go investigate it? Would changing the OOC limits not change this risk (say to 2 std devs instead of 3)? Since I’m not enough of a math whiz to derive the probabilities (I’m assuming the true probabilities exist), I can’t say anything more than Gabriel has, which is that it is an analogy. The only thing I can add is that I’m not one to easily accept something because an “expert” (even the original inventors) says it’s so. Now, if they provide empirical evidence or mathematical proof, that’s another story (and if I do my research I may find that they do). You can’t post entire books, but what you did post certainly supported your point. Thanks for the discussion.
    Jamie

    0
    #83870

    Jamie
    Participant

    MAX, about all I can say is, well… damn :) The 2 methods are not mathematically the same, so they could yield different results. Is the DOE method prone to error (meaning you might have made a mistake in the calc)? I’d say yes, but I’m assuming that you checked it over and you think you did it correctly. It does have a number of steps and you need to make sure you use the correct response variable. Chances are you have 4 or more columns you could use for the response… the original observed response, the residual, the individual variance of the response (I don’t know what else to call (obs - predict)^2 * (n+1)/n), and the log of the variance. It’s possible you are looking at the wrong response; the response should be log(var). Assuming all is right, I would look at the main effects plots and the interaction plots (for log(var)). Also try plotting the straight residuals against each individual factor. Can you see a difference in the variation of the residuals, i.e. a different range for one level vs another? This might tell some of the story. If Bartlett’s shows a difference for one factor, I would really expect to see it in the graphical analysis I’m describing. Possibly also consider the p-values in the DOE analysis. Are they “close” to significant? How different is the p-value from the DOE vs Bartlett’s? Also, are the within-group log(var) values normally distributed? Taking the log is done to make them normal, but it might not do so.
    I’m trying to think off the top of my head quickly since I have an appointment, but possibly some of these ideas may help ferret out the inconsistency. Though I’m leaning towards Bartlett’s, since it did show a difference.
    Jamie

    0
    #83864

    Jamie
    Participant

    I like Gabriel’s interpretation, or analogy. The OOC test for any sample mean beyond 3 std errors from the mean sure seems to use all the same components as a 1-sample t-test: there is an alpha risk (1-.997), you can compute a beta risk (you have sample size and delta/sigma), and you have a target (the process mean) you are testing against, a mean, and a variance. You have a null hypothesis: sample mean = process mean, and an alternative hypothesis: sample mean ≠ process mean. You are asking whether this sample could have come from the same population. If your answer is reject the null, then you conclude the process has changed. How is this really so different that you would say zilch, nada, etc.? Does one really need to read several volumes to explain the difference? I think Gabriel did a nice job of justifying his thoughts, but I really haven’t heard a counter-argument.
    One might be able to show it isn’t mathematically the same as a t-test (though I’m not so sure of this), but I can’t imagine how one could say it isn’t a hypothesis test.
    Jamie

    0
    #83861

    Jamie
    Participant

    MAX, My apologies… I just reread your post and see that you are clear on ANOVA (my first read was that you were asking if you could use ANOVA). But I did eventually answer your question: if you assume no interactions, then you can use Bartlett’s test for equal variances, one factor at a time.
    Jamie

    0
    #83860

    Jamie
    Participant

    MAX, Since no one has responded I’ll give it a shot. My answer is no, you cannot use ANOVA to test for differences in variance. On the contrary, ANOVA assumes equal variance within the groups and tests only the means. My suggestion would be… if you have multiple factors and can assume no interaction, then all you need to do is use Bartlett’s test for equal variances for each factor, one at a time. But you may be going out on a limb by assuming no interaction, especially for 2-way interactions. The squared-difference log approach really isn’t that hard to do once you have the data. If you have the data collected in a Minitab worksheet it takes a few minutes to do.
    A note for estimating the number of reps: you will want to do this by looking up the ratio of the 2 variances that you want to detect in an F table (I’m pretty sure Minitab can’t do this for you), where the numerator df = denominator df. The degrees of freedom + 1 will be your sample size; then divide this number by (the number of corner points in your experiment / 2). So to pick up a 4-fold increase in variance for a 2-factor experiment, I look up in the F table 3.79 (the first number less than 4). I get 7 df, or a sample size of 8. I divide this by 4/2 (half the number of corner points) and I get 4 reps. I did this quickly but I think I got it right.
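    That table walk can be checked in Python (an alpha of 0.05 is assumed here, since that is what the 3.79 corresponds to):
```python
from scipy.stats import f

# Walk down the F-table diagonal (numerator df = denominator df) until the
# critical value drops below the variance ratio we want to detect.
target_ratio = 4.0
alpha = 0.05
corner_points = 4        # 2-factor, 2-level design

df = 1
while f.ppf(1 - alpha, df, df) >= target_ratio:
    df += 1

n = df + 1                          # sample size per group
reps = n / (corner_points / 2)
print(df, n, reps)                  # 7, 8, 4.0 as in the hand calculation
```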
    Hope this helps, Jamie

    0
    #83858

    Jamie
    Participant

    Erik, Thanks for the reply. I was really assuming common sense would apply, but your point is a good one. Now, assuming there is no common-sense reason for the data to have a better lambda (maybe one exists and I just don’t have the common sense I think I have), the data is very smooth, has no outliers, but is skewed to the right because of the natural limit of zero. Many processes act like this. I cannot see a reason why Box-Cox would be inferior to my “simply trying numbers at random”. It does a search for lambda between -5 and +5 (at least Minitab does; I’m not sure if this is standard) trying to find an optimum. Since the data is smooth I see no reason that it would find a local optimum. So my only conclusion is that its criteria for judging normality are significantly different from mine. In the case I’m describing it was extremely obvious. The only other thing I can think of is the subgrouping. I generally judge normality based on the individuals, and Box-Cox uses subgroups unless you specify subgroup size = the entire sample. I’m not clear on how it uses subgroups in judging the deviation from normality, but it certainly must use them… I think I’m answering my own question as I write this. It’s not finding a local optimum… it’s finding some optimum based on how it uses subgroups. This would indeed be a reason why the transformation would not work as well as it could, and why I could find a better solution for the individuals by simple trial and error.
    The next point I’ve always wondered about is… putting on my mathematical hat… a lambda = 0 really isn’t raising your data to the power of zero (that would just be 1), it’s making a log transformation. I could be wrong, but as lambda approaches zero… is it really a smooth transition in the deviation from normality at the point where we hit a log transform (and if it is a smooth transition, why would it approach a log with a certain base)? My math hat says no (but my math hat has some moth holes in it since it’s been in the closet a long time). I’m under the impression that a log transformation is simply a special case Minitab tries. In other words, as lambda gets very small does it really approach a log? Again my math hat says no, but are there any math guys out there who really know this? If this is true, the importance of it is that even if your confidence interval includes zero it wouldn’t necessarily mean that the log transformation applies. I really like to be wrong since it means I learn something. Can anyone confirm this?
    Thanks, Jamie
     

    0
    #83849

    Jamie
    Participant

    My apologies, this post doesn’t make sense where it falls in the thread. Move it up several notches and it might make more sense. It’s more a reply to the original post.

    0
    #83848

    Jamie
    Participant

    Let’s think about this from the customer’s eyes. I want a customer sigma level, not a process or product one; I don’t care about this opportunity stuff. Meaning, for each “thing” you try to do for me, I want it done right. Now if you tell me you are Six Sigma, then as a customer I should only experience 3.4 failures per million “things you do or provide me”. In other words, I’m led to believe that out of 1 million services or products I can expect them to be right 99.9997% of the time. Or at least that’s what I hear your advertising say (remember, I’m trying to think like the customer). What else does a customer want but to know how often you will produce a good product?
    So let’s put this in perspective with a real example I’ve got (this is my true example). I bought 2 GE products (fairly close to top of the line) for my new house: a stove and a hanging microwave. Both of these quit working in the first year I had them (it wasn’t an installation problem; the servicemen both confirmed that it was the product). I think it’s reasonable for a customer to expect that these products should work for at least a year (of all the stoves and microwaves I’ve owned that weren’t GE, they all lasted much longer; actually I don’t think I ever had a non-GE stove or microwave break). If this is a reasonable expectation from a customer, then I consider this to be 2 failures. What are the chances I’m 2 of the 3 failures that were going to happen out of a million? This from a company that touts Six Sigma (again, I’m thinking solely from a customer’s perspective).
    Now please don’t tell me I don’t understand what sigma level is; I’m not trying to explain sigma level here. I’m trying to explain what I think the average person hears from companies that make claims about Six Sigma.
    I think that the average person (who has heard something about Six Sigma) hears that a Six Sigma company should rarely, if ever, produce a bad product (if that’s not what Six Sigma is about, then what is it about?). So when a Six Sigma company doesn’t live up to these expectations it can “leave a bad taste in a consumer’s mouth”.
    My conclusion is that Six Sigma companies need to be very responsible about how they market their Six Sigma efforts.
    Jamie
     

    0
    #83843

    Jamie
    Participant

    I too would like to know how the Anderson-Darling test statistic is calculated. I’ll tell you what I’ve learned and maybe someone can tell us whether this is correct…
    Anderson-Darling uses the squared difference from normality, where normality is based on the normal cumulative density function for the mean and standard deviation from your dataset.
    Ryan-Joiner uses a statistic based on the correlation between the data and the line created from a normal probability plot.
    My thoughts are that AD is less robust against large data sets and granularity, while Ryan-Joiner is less robust against outliers, but I don’t know for sure.
    This is my understanding, but that doesn’t mean it is correct. I’ve had a hard time finding much real information on how these tests are actually performed. Comments from anyone who knows whether or not these assumptions are correct would be greatly appreciated.
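    For anyone who wants to experiment, here is a small Python sketch; scipy has Anderson-Darling directly, and Shapiro-Wilk is used below as a stand-in for Ryan-Joiner since both are correlation-type tests (an assumption on my part, not an exact equivalent):
```python
import numpy as np
from scipy import stats

# Hypothetical sample; swap in your own measurements.
rng = np.random.default_rng(1)
data = rng.normal(loc=10, scale=2, size=50)

ad = stats.anderson(data, dist='norm')     # Anderson-Darling
print(ad.statistic, ad.critical_values)

w, p = stats.shapiro(data)                 # Shapiro-Wilk (correlation-type test)
print(w, p)
```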
    Thanks, Jamie
     
     

    0
    #83840

    Jamie
    Participant

    >Also, there are some modifications to the Box-Cox transformation >method that you might need to use to normalize your data. 
    I'd be interested in what these are... I'm aware of failures when the max-to-min ratio of the data is less than 2. In this case you can subtract a constant close to the minimum value from all the data. This will yield a max-to-min ratio > 2 and sometimes provide a transformation that will be normal. As I posted before, this still only works if your original data is smooth and continuous.
    I also ran into a situation where Minitab found what it said was the "best" lambda, but even when graphed and tested, this lambda was still skewed and not normal (using Anderson-Darling). By trial and error I was able to get a lambda that produced very normal-looking data (by histogram) and one which passed AD. I did this only because the data looked like a set that should be able to be transformed (it was loading times, so it was skewed by the natural boundary of 0). I've always wondered why Minitab wasn't able to get closer to the lambda I did.
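    A rough sketch of both ideas (shifting the data toward zero before Box-Cox, and scanning lambdas by hand against a normality test). This is my own illustration in Python with invented data, not a description of what Minitab does internally.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    x = 100 + rng.lognormal(mean=0.0, sigma=0.6, size=200)   # skewed, max/min ratio near 1

    # (1) shift toward zero so Box-Cox has some leverage (max/min ratio becomes large)
    shifted = x - (x.min() - 0.01)

    # (2) scan candidate lambdas and keep the one with the lowest A-D statistic
    best = min(((stats.anderson(stats.boxcox(shifted, lmbda=l)).statistic, l)
                for l in np.arange(-3, 3.01, 0.1)), key=lambda t: t[0])
    mle_lambda = stats.boxcox(shifted)[1]                    # scipy's "optimal" lambda

    print("MLE lambda:", round(mle_lambda, 2), " grid-search lambda:", round(best[1], 2))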
    Jamie
     

    0
    #83771

    Jamie
    Participant

    Gabriel, I haven't read all the posts in this thread, but I think you make a number of good points here. My thoughts: the only reasons a customer would demand a Cpk > 1.33 without specifying a Ppk are that (1) they don't understand what they are asking for, or (2) they can easily adjust for differences between lots of raw material and only need consistency within a lot. So it might not matter if there is a lot of overall variation, as long as there isn't much within the batch of material they will use at any one time. I don't have any real-world examples, but they might exist.
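    To make the within-lot versus overall distinction concrete, here is a minimal sketch with invented numbers (not an example from the thread): Cpk uses within-subgroup sigma, Ppk uses overall sigma, so a process whose lot means drift can show a good Cpk and a mediocre Ppk.

    import numpy as np

    rng = np.random.default_rng(3)
    usl, lsl = 10.5, 9.5
    # 20 lots of 5 parts: tight within each lot, but the lot means wander
    lots = np.array([rng.normal(10 + rng.normal(0, 0.15), 0.05, size=5) for _ in range(20)])

    sigma_within = np.sqrt(np.mean(lots.var(axis=1, ddof=1)))   # pooled within-lot sigma
    sigma_overall = lots.std(ddof=1)                            # overall sigma
    mean = lots.mean()

    cpk = min(usl - mean, mean - lsl) / (3 * sigma_within)
    ppk = min(usl - mean, mean - lsl) / (3 * sigma_overall)
    print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")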
    Jamie

    0
    #83770

    Jamie
    Participant

    I think it depends on how you are defining percent: is it percent contribution, percent of study variation, or precision-to-tolerance? If it's the third, I think you are correct. If it's the first, I believe you would use (sigma^2)measure / (sigma^2)total; if it's the second, it's 5.15*(sigma)measure / 5.15*(sigma)total... notice the 5.15s cancel.
    Does anyone know if, in discussing gage variability, there is a standard (default) way of reporting it? We tend to use percent contribution, but I often prefer percent of study variation. P/T also has its use, especially if your data range far exceeds your tolerance.
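    A short sketch of the three calculations side by side, using made-up variance components and an assumed tolerance (the 5.15 multiplier follows the older convention; some shops use 6):

    gage_var = 0.04          # variance attributable to the measurement system (invented)
    part_var = 0.96          # variance attributable to the parts (invented)
    total_var = gage_var + part_var
    tolerance = 6.0          # USL - LSL, assumed

    pct_contribution = gage_var / total_var * 100                     # ratio of variances
    pct_study_var = (gage_var ** 0.5) / (total_var ** 0.5) * 100      # ratio of std devs; 5.15s cancel
    p_to_t = 5.15 * (gage_var ** 0.5) / tolerance * 100

    print(f"%Contribution = {pct_contribution:.1f}%")
    print(f"%StudyVar     = {pct_study_var:.1f}%")
    print(f"P/T           = {p_to_t:.1f}%")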
    Jamie
     

    0
    #83766

    Jamie
    Participant

    My recommendation is to start at a high level, creating a "macro" process map. Treat each "trigger" as one step (assuming I'm understanding your meaning of trigger). This will keep the team from getting frustrated. Then come back and do separate process maps (micro level) for those triggers that are important or contain many details.
    Jamie

    0
    #83765

    Jamie
    Participant

    I think Bob M has pretty much nailed it, but I'd like to add that the criteria for finishing a project are usually specified in the Problem Statement/Objective, which is one of the first things you will do in a project if it's not already provided. The problem statement should state explicitly that the primary metric will be reduced from Y (based on historical performance) to X (based on data collected after project improvement) by a certain date. I also expect to see this proven statistically. If your mean (assuming you have a mean problem) is better than target but because of variation you can't prove it statistically, you aren't finished.
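    As a rough illustration of "proven statistically" (my own example, with invented data): one common way is a two-sample t-test comparing baseline data against post-improvement data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=12.0, scale=2.0, size=30)     # historical cycle times (invented)
    improved = rng.normal(loc=10.5, scale=2.0, size=30)     # after the project changes (invented)

    t, p = stats.ttest_ind(baseline, improved, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.4f}")
    # if p < 0.05 we can claim a statistically significant shift in the mean;
    # if not, the project is not "finished" in the sense described above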
    Jamie 

    0
    #83762

    Jamie
    Participant

    Not sure, but I think the problem is that you don't want to transform the difference. You want to transform the original mean and the target, where target - mean = difference. Now take the difference between the transformed mean and the transformed target. Use this for power and sample size with the standard deviation from the transformed data. See if this gives you a reasonable answer.
    Now I'm really just speculating, but this would apply to a one-sided t-test, so you might need to put a target on each side, transform them, and take the differences. If you do it this way, remember to use the null hypothesis ("") with alpha/2. I think this might yield two slightly different sample sizes depending on the skewness in the original data set (but I'm not sure until I try an example). Use the larger if it does. It's late in the day, so just shoot me if this is wrong, or go into a rant about what they are teaching black belts these days :)
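    Since the above is speculative, treat the following purely as a sketch of the idea with invented data: transform the data and the target separately, take the difference on the transformed scale, and feed the standardized difference into a one-sample power calculation. The statsmodels call is just one assumed way to do the power math outside Minitab.

    import numpy as np
    from scipy import stats, special
    from statsmodels.stats.power import TTestPower

    rng = np.random.default_rng(5)
    data = rng.lognormal(mean=1.0, sigma=0.5, size=100)     # skewed original data (invented)
    target = 2.0                                            # target on the original scale (invented)

    transformed, lam = stats.boxcox(data)
    t_mean = transformed.mean()
    t_target = special.boxcox(target, lam)                  # same lambda applied to the target
    t_sd = transformed.std(ddof=1)

    effect = abs(t_mean - t_target) / t_sd                  # standardized difference
    n = TTestPower().solve_power(effect_size=effect, alpha=0.05, power=0.9,
                                 alternative='two-sided')
    print(f"lambda = {lam:.2f}, suggested n ~ {np.ceil(n):.0f}")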
    Jamie

    0
    #83760

    Jamie
    Participant

    “1- Do you know if it is statiscally possible that certain veriable data can not pass normality test even if they are tranformed using Box-Cox method. “
    reply…
    It certainly is possible. There are many reasons that continuous data may not be normal: multiple modes, skewness, kurtosis, granularity, and outliers. Box-Cox will only help with one of these five, skewness. If your data is not normal because of one of the other four, doing a transformation will not help. The good news is that diagnosing the reason for non-normality is often as important to solving the problem as the actual data values.
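    A small sketch of that point with invented data: a bimodal variable fails a normality test both before and after Box-Cox, because no lambda can remove the second mode.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    bimodal = np.concatenate([rng.normal(5, 0.5, 200), rng.normal(9, 0.5, 200)])

    transformed, lam = stats.boxcox(bimodal)        # data must be positive, which it is here
    print("p-value before transform:", stats.normaltest(bimodal).pvalue)
    print("p-value after  transform:", stats.normaltest(transformed).pvalue)
    # both p-values stay tiny: the two modes are still there, so the transform doesn't help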
    Hope this helps,
    Jamie
     
     
     
     
     
     

    0
    #83759

    Jamie
    Participant

    I usually summarize the data and enter it in this form. Doing it this way has always worked for me, but you shouldn't need to summarize your data. I'd suggest summarizing it, or posting an example of a few lines of data with the specific error message.
    Jamie

    0
    #83701

    Jamie
    Participant

    If the catapult is not possible, another exercise I've seen suggested for process improvement involves making paper airplanes or simple paper helicopters. The objective for airplanes is either flight distance or time aloft (I'm not sure whether the helicopters use time aloft or falling straight onto a target). I've not used these in a class, but I've seen them posted as suggestions in this forum several times. Supplies are certainly minimal (paper for airplanes; paper and a paper clip for helicopters). You might try a search.
    Just thinking out loud, but an interesting exercise might be (at the beginning and then at the end of training) to allow teams 1 minute to make as many planes as they can. Give them a spec limit: the plane must fly at least a certain distance (or within an upper and lower distance) to be considered a good product. Charge them $1 per sheet of paper used and pay them $2 for each plane that is in spec (funny money, of course). The team that makes the most "money" gets a prize.
    Jamie

    0
    #83699

    Jamie
    Participant

    We use the catapult as a project from beginning to end in our classes. We have found it to be a great "equalizer" since we teach Six Sigma to a rather diverse audience. After each section we have a breakout session where the catapult is used (it can't be used for all sections, but it can for most). So in our classes we will do a process map, XY matrix, FMEA, gage study, capability, hypothesis testing, regression, DOE, mistake proofing, etc., all on the catapult.
    We start by telling them all they have to do for now is try to figure out how to make a 60″ shot +/- 2 inches. Then we tell them that by the end of the class they will be able to make any shot without a practice shot by using the tools we will teach. We then hold a competition between teams at the end of the Improve phase (I assume this is fairly standard for teaching DOE).
    Now to answer your question: yes, regression can be applied to solve the problem if the teams can figure out how to get enough "power" into the catapult, but what you will find is that enough power to make the long shots will be too much power for the very short shots. In other words, to hit a short shot they might only need to pull back a few degrees. This creates a large problem because the gage for pull-back is pretty poor, and even a degree over or under will cause them to miss a shot. A DOE will alleviate this so they can use a powerful setting (2 rubber bands) for long shots and a less powerful setting (1 rubber band) for short shots. Although our last black belt class did exceptionally well using just regression to solve the problem, so yes, regression can work (but DOE is really just an extension of regression anyway).
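    As a sketch of the regression approach (the pull-back and distance numbers below are invented, not our class data): fit distance against pull-back, then invert the fit to pick a setting for the 60-inch shot.

    import numpy as np

    pull_back = np.array([20, 25, 30, 35, 40, 45, 50])        # degrees (invented)
    distance = np.array([28, 38, 47, 59, 68, 79, 90])         # inches (invented)

    slope, intercept = np.polyfit(pull_back, distance, 1)     # simple linear fit
    target = 60.0
    setting = (target - intercept) / slope
    print(f"predicted pull-back for a {target:.0f}-inch shot: {setting:.1f} degrees")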
    Hope this helps,
    Jamie
     
     

    0
    #82410

    Jamie
    Participant

    Muru, if I recall correctly, George Eckes uses a number of case studies in his book The Six Sigma Revolution, one of which involves Six Sigma within a hotel chain (I think it was Westin). You might want to take a look at it.
    Jamie
     

    0
    #82229

    Jamie
    Participant

    By one trial I assume you mean you will have multiple operators/inspectors measure (categorize) each sample only one time. You can indeed do this; the only problem is you will not be able to determine repeatability. My assumption would be that cost is the issue if you don't have operators repeat themselves. If this is the case, I'd do the study without repeating it and look at reproducibility. Generally speaking, you will see more problems with reproducibility than you would with repeatability. This makes sense: a single operator is more likely to be able to repeat his own measurement than two different operators are to reproduce each other. If you found a good gage with respect to reproducibility, you might conclude that the same would be true for repeatability (though assumptions can be dangerous). It's all a matter of cost, though. If you find an unacceptable gage from reproducibility alone, then I'd say run another trial to determine the repeatability component. Having both of these can be very important in understanding how you might go about correcting the gage. If operators cannot repeat themselves, chances are there is something wrong with the gage itself. If operators can repeat but not reproduce each other, chances are it's a procedural problem (i.e., operators are using the gage differently).
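    If you only get one trial per appraiser, a reproducibility-only look might be sketched like this (the pass/fail ratings and the reference standard below are invented for illustration):

    import numpy as np

    # rows = samples, columns = appraisers; 1 = pass, 0 = fail (invented)
    ratings = np.array([[1, 1, 1],
                        [0, 0, 1],
                        [1, 1, 1],
                        [0, 0, 0],
                        [1, 0, 1],
                        [1, 1, 1]])
    reference = np.array([1, 0, 1, 0, 1, 1])      # known standard, if one exists (invented)

    all_agree = (ratings == ratings[:, [0]]).all(axis=1).mean()   # every appraiser matches appraiser 1
    vs_standard = (ratings == reference[:, None]).mean(axis=0)    # agreement with the standard, per appraiser

    print(f"all appraisers agree on {all_agree:.0%} of samples")
    print("agreement with standard per appraiser:", np.round(vs_standard, 2))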
    Hope this helps,
    Jamie
     

    0
    #82147

    Jamie
    Participant

    I'm not aware of much work done on CIs for sigma level (but I'm not the best person to know). I do know some posts to this group have provided a method to do so; you might try a search.
    The main reason I'm posting is to suggest that you consider research on the 1.5 sigma shift that we apply to long-term data to estimate short-term sigma level. Every sigma lookup table (not Z table) I've used adds 1.5, yet I've not seen any empirical evidence to support this. This would certainly be interesting to either study in simulation or derive true values for by studying real processes. The literature study alone would be worthwhile. A whole host of questions are still left unanswered about 1.5. What are the conditions where 1.5 is appropriate, and when should 1.5 be added? I'd argue that most data (not taken in true batches) is somewhere between short and long term. Does this mean we should be adding some number between 0 and 1.5? Is 1.5 the maximum we are likely to see between short and long term, or is it some average? Is this number different for different industries/process types? Certainly other questions exist, and I've yet to meet an MBB who can answer them (not to say there aren't readers of this group who can).
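    To illustrate the kind of simulation I mean (my own sketch with invented drift, not evidence for any particular shift): let the batch means wander, then compare the short-term Z (within-batch sigma) with the long-term Z (overall sigma) and see how big the gap actually is.

    import numpy as np

    rng = np.random.default_rng(42)
    usl = 13.0
    batches = np.array([rng.normal(10 + rng.normal(0, 0.5), 1.0, size=50)
                        for _ in range(100)])                 # 100 batches of 50 (invented)

    sigma_st = np.sqrt(np.mean(batches.var(axis=1, ddof=1)))  # pooled within-batch sigma
    sigma_lt = batches.std(ddof=1)                            # overall sigma
    mean = batches.mean()

    z_st = (usl - mean) / sigma_st
    z_lt = (usl - mean) / sigma_lt
    print(f"short-term Z = {z_st:.2f}, long-term Z = {z_lt:.2f}, gap = {z_st - z_lt:.2f}")
    # with this amount of drift the gap comes out well under 1.5, which is the point:
    # the "right" shift depends on how much the process actually moves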
    Best of luck in your studies,
    Jamie
     

    0
    #81044

    Jamie
    Participant

    I didn’t see this posted in the other thread so I’ll add it here. The explanation I like is….
    You are using an estimate of the mean to calculate the squared deviations. Since this estimate was taken from the same data as the sample used to calculate the variance, the deviations will tend to be smaller than they would be around the true mean, which is different from the one you calculated. Because of this, the estimate of the variance loses a degree of freedom (hence n-1). The impact of dividing by n-1 vs. n is to increase the variance estimate slightly (depending on sample size).
    As sample size increases, our confidence in the estimate of the mean increases, and therefore the impact of n-1 becomes smaller.
    If you knew the true population mean but did not know the variance (a strange situation indeed) and you took a sample from the population, the correct way to estimate the variance would be to sum the squared deviations Xi - Mu (not Xi - Xbar) and divide by n (not n-1). You can divide by n since you are no longer using an estimate of Mu but instead are using the true value.
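    A quick simulation of that point (my own illustration): averaged over many samples, dividing by n under-estimates the true variance while dividing by n-1 does not.

    import numpy as np

    rng = np.random.default_rng(2)
    true_var = 4.0
    n = 5
    samples = rng.normal(0, np.sqrt(true_var), size=(100_000, n))

    var_n = samples.var(axis=1, ddof=0).mean()     # divide by n
    var_nm1 = samples.var(axis=1, ddof=1).mean()   # divide by n-1

    print(f"true variance {true_var:.2f}, /n estimate {var_n:.2f}, /(n-1) estimate {var_nm1:.2f}")
    # the /n average comes out near true_var * (n-1)/n = 3.2; the /(n-1) average near 4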
    Interesting,
    Jamie
    If someone disagrees with this, please reply, because it's the one explanation I've been given that makes complete sense.
     

    0
    #80661

    Jamie
    Participant

    The difference is because a paired t-test tests the difference between the pairs of data. It basically takes the difference between each "pair" and tests whether this difference is significantly different from 0. You can also do a paired t by taking the differences yourself and doing a one-sample t of the differences against 0; you'll get the same result. So if you rearrange the pairs, the differences between the pairs will also be different. Paired tests are only done when it "doesn't make sense" to be able to rearrange the data. If you subjected 10 parts to a chemical and wanted to test the difference before and after exposure, it's important that the before measurement for part 1 is matched with the after measurement for part 1. It's not proper to match the before from part 1 with the after of, say, part 7 (i.e., rearrange the data). If it makes sense that the data doesn't have one and only one matching "pair," you probably should not be using a paired t-test.
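    A small sketch of that equivalence with invented before/after values:

    import numpy as np
    from scipy import stats

    before = np.array([10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 10.3, 9.7])
    after = np.array([9.6, 9.5, 10.1, 9.7, 9.4, 9.9, 10.0, 9.3])

    paired = stats.ttest_rel(before, after)
    one_sample = stats.ttest_1samp(before - after, 0.0)
    print(f"paired:     t = {paired.statistic:.3f}, p = {paired.pvalue:.4f}")
    print(f"one-sample: t = {one_sample.statistic:.3f}, p = {one_sample.pvalue:.4f}")
    # shuffling the "after" column would change both results, which is why the pairing must be meaningful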
    Hope this helps,
    Jamie
     
     
     

    0
    #80347

    Jamie
    Participant

    Convert 3.4 DPMO to a proportion (i.e., 3.4/1,000,000) and look up the right-tail area in a standard normal table (most tables don't go this far, so you will probably need one associated with Six Sigma). You will get 4.5 standard deviations.
    Now add 1.5 to this and you get 6 sigma. We add 1.5 because the general assumption for calculating sigma level is that you have long-term data (it represents most of the system's variation) but want to estimate short-term sigma (i.e., what the customer is likely to see at any given time, from a batch or run or order). Past studies have shown that short-term sigma level is generally 1.5 sigma better than long-term.
    This is how 3.4 DPMO equates to Six Sigma.
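    The same lookup done in code (a sketch; scipy's normal quantile function stands in for the table):

    from scipy import stats

    dpmo = 3.4
    p_defect = dpmo / 1_000_000
    z_long_term = stats.norm.isf(p_defect)       # right-tail inverse of the normal CDF
    sigma_level = z_long_term + 1.5              # conventional short-term shift

    print(f"Z (long term) = {z_long_term:.2f}, sigma level = {sigma_level:.2f}")
    # prints roughly Z = 4.50, sigma level = 6.00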
    Jamie
     

    0
    #80065

    Jamie
    Participant

    I don't think the question sounds ridiculous. Is a shortcoming not a disadvantage? I think the poster is asking for pros and cons. Why would a company implement a system with disadvantages? Because the advantages outweigh the disadvantages. Simply put, the benefit exceeds the expense. This sounds like every business decision we make.
    I see some of the disadvantages (cons, shortcomings, whatever you want to call them) as time, cost, and needed commitment. Six Sigma requires significant resources throughout a business unit to become successful. If you don't allocate enough resources, you can waste the effort you put in.
    Jamie
     

    0
    #79996

    Jamie
    Participant

    Also, my apologies. I just realized you are Marc with a "c," not Mark with a "k."
    Jamie

    0
    #79995

    Jamie
    Participant

    Mark, thank you for the additional input. It was extremely helpful, particularly the questions about the process. As you stated, if the original poster has not found the answer to his question, he needs to provide more information.
    Thanks,
    Jamie

    0
    #79971

    Jamie
    Participant

    “If the process is in control and capable, then continue running it. If it is out of control and/or incapable, take the appropriate measures to correct it.”
    So what tools do you give an operator to determine this in a production setting? The original poster appears to be using only an Xbar chart (or let's assume this). Do you recommend he do anything else to detect the problems you discuss?
    Jamie
     

    0
    #79952

    Jamie
    Participant

    Dave, That was extremely enlightening and an excellent suggestion.
    Thanks, Jamie

    0
    #79949

    Jamie
    Participant

    Gabriel, the original poster does not mention using a range chart, but asks what he should do if he has either an individual point outside the spec limit and/or an individual point beyond the control limit. My suggestion was simply to examine a range chart, for it could indeed show that the range is OOC (or maybe not, as you point out). If this is not true, or investigating the range is not a proper thing to do, please help me understand why.
    Thanks,
    Jamie
     

    0
    #79901

    Jamie
    Participant

    It sure does, because the tails of the normal curve go to infinity. Remember, 6 standard deviations is essentially 0 defects per million (not 3.4; that's quoted with a 1.5 shift). So I'd say again (see my previous post) that you either need to increase the sample size or declare that you have approached 6 sigma long term / 7.5 sigma short term. The only other option I see is to calculate sigma with only 1 defect, stating that you are likely better than this. But is this just a numbers game? Have you accomplished what you need to do? Zero defects for three months (assuming an adequate sample size) sounds like a real victory. Time for a new project.
    Jamie

    0
    #79896

    Jamie
    Participant

    I just ran into this while making a spreadsheet to calculate sigma level based on DPMO. What happens when you enter 0 defects? Well, sigma level goes to infinity. What I did to handle this was assume that instead of infinity it approached 6 sigma long term, or 7.5 sigma short term. Without an incredibly large sample size I don't know of much else you can do. Though I think it's kind of a moot point: if you truly aren't seeing any defects after three months, is there really a need to track it? (Tracking data is a non-value-added activity.)
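    A sketch of that spreadsheet logic as a small function (the cap is the same arbitrary choice described above):

    from scipy import stats

    def sigma_level(defects, opportunities, shift=1.5, cap_long_term=6.0):
        """Long-term Z plus the conventional shift, capped when no defects are seen."""
        if defects == 0:
            return cap_long_term + shift
        z_lt = stats.norm.isf(defects / opportunities)
        return z_lt + shift

    print(sigma_level(3.4, 1_000_000))   # ~6.0
    print(sigma_level(0, 250_000))       # 7.5 (capped instead of infinity)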
    Jamie

    0
    #79894

    Jamie
    Participant

    Scott, I agree with what you are saying, but I think you are not addressing what the original poster asked. I believe the situation is that the point on the Xbar chart (the average of the individuals in the sample) is in control, but when he looks at the actual individual observations he sees one individual that is outside. If this is true, then the difference should show up in the range chart. So I too recommend using both an Xbar and an R chart, and looking at both.
    Jamie

    0
    #79848

    Jamie
    Participant

    If you are measuring a continuous variable, there is some point (a spec limit) beyond which the measured value is either too big or too small to be an acceptable part. If you collect long-term data and calculate 3.4 defects (3.4 parts beyond your spec limit) per million, you can look this probability up in a standard normal table and it gives you a Z-score of 4.5. This is the number of standard deviations from the mean to the nearest spec limit. The standard is to report short-term sigma level by adding 1.5 to this, hence 6 sigma in this example.
    This can be done for categorical data as well, by simply calculating the ratio of units containing defects to the total number of units. Look this up in a normal table to get a Z-score and add 1.5 to it (if you have collected long-term data).
    Jamie
     

    0
    #79754

    Jamie
    Participant

    An idea: define a defect as any question answered as somewhat or very dissatisfied, and your opportunity count as the number of questions answered. This will give you a percent defective that you can look up in a normal table to get a Z-score (add 1.5? why not, everyone else does... note the sarcasm).
    Jamie
     

    0
    #79724

    Jamie
    Participant

    I'm in the "I don't get it" camp. Hold someone to $1 million a year when they often can't control what projects they get, what resources are available, or how much the organization supports them (not to mention uninterrupted time)? That's 10 to 20 times the salary of the employee, while the rest of the organization isn't asked to justify even their salary alone. Sounds like we hold Six Sigma analysts to a different standard.
    Let's say $200k for salary, benefits, admin, office, tools, etc. per black belt are your costs. I think a $500k goal is a much more reasonable expectation, and even that is probably a higher return than any other aspect of an organization.
    Jamie
     

    0
    #79691

    Jamie
    Participant

    I believe the goal itself is somewhat arbitrary, but the process of setting the goal is not. It's simply a target to move towards. If we set goals perfectly, i.e., at the right level where the average project will get to the highest level possible, then half of the time teams will not make that goal and half of the time they will (assuming project performance is normally distributed). So what do you do to increase the sigma level of project performance? Most teams will set lower goals (i.e., widen the spec limit; it's the easiest way to increase sigma level). I'm not a fan of public hangings for people who set aggressive goals and accomplish all that is possible yet do not meet them. I wouldn't hang a person who set a goal of a 65% reduction, did everything possible, delivered 59%, and saved me $176,000. Would a person who set a goal of 40% and achieved 42% be better than the person who delivered 59% against a goal of 65% (given the same project)?
    The goal is a means to an end, not an end in itself. But yes, someone who repeatedly doesn't make goals should either improve the process used to set goals or improve the process used to achieve results.
    Jamie
     

    0
    #79672

    Jamie
    Participant

    Pep, I've found goals to be somewhat arbitrary. They are often most useful as a communication tool. The direction is obvious, but how much we can reasonably move in a project is not. I recommend discussing this with the team working on the project, the process owners, and the champion. A good starting point is reducing the defect percentage by 50%; this would be from 74.4% to 37.2%. I wouldn't just use this, but instead use a 50% reduction as a starting point for dialog. Does the team feel this is "unreasonably reasonable"? What do the process owners think about it, as well as the champion? It doesn't mean that's all you do if you find more opportunity, but it's at least a starting point. Look at how the team reacts to this goal: do they look at you as if you are insane, or do you get a response like "that's going to be hard, but possible"? How do the champion and process owners react to a 50% reduction? Would the customer be elated with a change like this? Also consider the economic impact of the goal: does it bring about a significant return? Use this as a starting point for bartering a reasonable goal. A goal is something everyone should be comfortable with.
    Jamie
     
     

    0
    #78961

    Jamie
    Participant

    Black Noise is Special Cause Variation and White Noise is Common Cause Variation.

    0
    #78884

    Jamie
    Participant

    The routes I’ve found are:
    1. Get certified by a respected company: GE, Allied Signal, Honeywell, Dupont, etc. As mentioned, this can be a catch-22: you may need to be certified to get the employment, and you need the employment to get certified by the company.
    2. Get certified by a respected consulting firm. The company links at the bottom of this page are well respected. All appear to be very good, but costly (I think $10k+ for black belt). I was certified by Six Sigma Qualtec.
    3. Take the ASQ exam. I haven't found a less expensive way to get certified.
    One of the best ways would be multiple certifications, i.e., you could be certified by a consulting firm and by an employer, and take the ASQ exam. Backing this up with project experience would seem to cover all the bases.
    Jamie
     

    0
    #78148

    Jamie
    Participant

    A tootsie roll is, as the picture shows, a chocolate, fudgy type of candy that is about 3 cm long.  They can get very gooey if they get warm.
    The catapult DOE is done using a little toy catapult that uses rubber bands as tension devices.  You can set up a DOE using such factors as: different types of projectiles, different sizes of projectiles, different shapes of projectiles, different rubber band thicknesses, different rubber band lengths, the angle of the base of the catapult, and anything else you can think of.  The object of the game is to launch the projectile the longest distance.
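    A sketch of how a small two-level factorial on a few of those factors might be set up and analyzed (the factor levels and distances below are invented for illustration):

    import itertools
    import numpy as np

    factors = {
        "rubber_bands": [1, 2],
        "pull_back_deg": [30, 45],
        "projectile": ["ball", "tootsie_roll"],
    }
    levels = list(factors.values())
    runs = list(itertools.product(*levels))        # 2^3 = 8 runs in standard order

    # invented distances (inches), one per run, in the same order as `runs`
    distance = np.array([30, 34, 55, 58, 48, 52, 80, 84], dtype=float)

    # main effect = average distance at the high level minus average at the low level
    for i, name in enumerate(factors):
        high = distance[[run[i] == levels[i][1] for run in runs]].mean()
        low = distance[[run[i] == levels[i][0] for run in runs]].mean()
        print(f"{name}: {high - low:+.1f}")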
     

    0
    #78131

    Jamie
    Participant

    You can try a DOE for paper airplanes.  See what factors make them fly the farthest.  You can also do a DOE with a catapult.  People tend to enjoy anything that deals with flying objects.
     
     

    0
    #77848

    Jamie
    Participant

    Billybob,
    Sorry.  I guess it's a Southern thing.  All of my family in Alabama and Kentucky who have the name Billy Bob or Ray Bob or Jim Bob or whatever are called that because of a combination of their names (William Robert, Raymond Robert, James Robert, etc.).
    My Apologies,
    Jamie

    0
    #77840

    Jamie
    Participant

    Even though each house is unique, you still basically do things in the same order.  You still put the wiring in before you hang the sheetrock, and you hang and finish the sheetrock before you do the interior painting.  Yes, the weather factor is uncontrollable, but most contractors I know have a weather factor built in.  An unusually rainy or snowy season can throw some chinks in this timeline, but under fairly normal conditions and with proper planning, it shouldn't be as big an issue as it can be.  Process mapping is a great tool for contractors.  You map each step of the house-building process.  You also allow for building code inspectors and any rework loops.  When you list your inputs and outputs at each step, you will start to see where your areas of opportunity exist.  Like Billy Bob said, maybe it's your subcontractors or your suppliers who are not capable.
    Six Sigma is driven by customer satisfaction, and in house building, customer satisfaction is do or die, especially if you are building custom homes or even a spec house with no initial purchase contract on it.  What could be worse than building a house to sell and then no one wanting to buy it, because you either put in more options than the average customer wants in a house its size, so it is priced out of its relative category, or you put in too few options and people think it is built cheap?  Trust me, I grew up in a family-owned construction business; I've seen both sides of that coin occur.
    Personally, I think Six Sigma is perfect for your line of business. 
     

    0
    #77478

    Jamie
    Participant

    Great replies... It makes sense to define the failure categories and then run an attribute gage R&R. My expectation is that my gage may fail, as Ron has written: "This has proven every time I run it that the system in not capable of discriminating and involves educating the data entry personnel of the proper definitions."
    But I have another, larger concern about this gage: whether or not the complaint is ever logged. For example, last month was a stellar month; shipping problems were almost zero, and people were asking if we really need this team. When I asked probing questions I got, "well, we [the sales group] were really busy and I don't think we had time to log the problems." Well, this isn't data, but it gives me a strong suspicion I have a bad gage. As Mike Carnell commented, I cannot create "fake" problems and see if they get logged, since that involves the customer. Would it make sense to just assume a bad gage, fix it, and move on, or is there some really creative way to measure how often a sales rep would actually log an incident?
    I'm still pondering Mike's comment about moving the measurement point back into the process. One place to possibly do this is inventory counts. We have some issues around inventory counting, and I do plan to do a gage study on it; whether we can reproduce and repeat inventory counts is a big question. Our accountant thinks we introduce more mistakes into inventory by counting it than we correct.
    One thought that just came to me: would it work if I had the shipping clerk maintain records of Stock Not Available and compare these to the forms that are generated over a month's period? This would give me the percent that were filed compared to what should have been filed. Then I could base my gage's suitability on that.
    Jamie
     
     
     

    0
    #77199

    Jamie
    Participant

    You asked about the advantages and disadvantages of doing a transformation.  From what I understand, you are saying that there is no rational subgroup.  How often are you taking your samples?  If you are taking them once a day or once a shift then you can rationally subgroup them into days or weeks.  Also, have you checked for autocorrelation?  If you are taking them at specific times during the day, you can  check for autocorrelation and possibly find a rational subgroup this way. 
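    As a sketch of the autocorrelation check just mentioned (invented data; in practice you would run this on your actual individual measurements), a quick lag-1 correlation tells you whether consecutive samples are independent enough to subgroup the way you planned:

    import numpy as np

    rng = np.random.default_rng(9)
    x = np.zeros(200)
    for t in range(1, 200):                       # invented, mildly autocorrelated series
        x[t] = 0.6 * x[t - 1] + rng.normal(0, 1)

    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    print(f"lag-1 autocorrelation = {lag1:.2f}")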
    The problem with using a transformation such as a Box-Cox transformation is that the data may now be normal, but does the control chart make sense to the general work force?  It probably won't.  If you chart time used for inspection or number of defects found, this will make sense to the local work force, but when you transform it into log data or exponential data, all they are going to see is numbers that they can't relate to their process.
    Hope this helps.
    Jamie

    0
    #76810

    Jamie
    Participant

    One thing you can also do as a mock project is paper airplane flight.  You can make your key measure the length of flight, the number of loops the plane does, or something similar.  You can use the entire roadmap to do this project, and it is not very lengthy.  Things to be included could be process maps, fishbone diagrams, a C&E matrix, FMEA, DOE, and a lot more.  You can cover the entire roadmap with this mock project.
    Another possible project is “How to brew the perfect cup of coffee.”  This would be great if you wanted to focus on attribute data and subjective measurement systems.  This is another project that can make use of the entire roadmap.
     
     

    0
    #76492

    Jamie
    Participant

    Have you done a measurement system analysis?  You need to determine what percentage of your sampling error is due to variation within your measurement system.  Yes, this is necessary.  Otherwise you could be working with useless information.

    0