iSixSigma

Tim Folkerts

Forum Replies Created

  • #103138

    Tim Folkerts
    Member

    V,
     
I’d love to see your example, because (along with Gabriel) I believe it is mathematically impossible.  The best you can hope to do is the set
    0 0 0 0 0 0 0 0 0 1
    (or other mathematically equivalent sets).
    mean = 0.1
    st dev = 0.3162
    Mean + 3 stdev = 1.05
    Thus all the data points are within 3 st dev of the mean.
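As a quick check of that claim, here is a minimal Python sketch (using the sample standard deviation, which is what the numbers above imply):

```python
from statistics import mean, stdev  # stdev = sample standard deviation

data = [0] * 9 + [1]                # the set 0 0 0 0 0 0 0 0 0 1
m, s = mean(data), stdev(data)
print(m, s, m + 3 * s)              # 0.1, 0.3162..., about 1.05
print(all(abs(x - m) <= 3 * s for x in data))   # True: every point is within 3 st dev
```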
     
    Tim

    0
    #102057

    Tim Folkerts
    Member

a) First find the probability of drawing ggy in that order.
The odds that the first is g is 5/15
The odds that the second is g is 4/14
The odds that the third is y is 7/13
The total odds are (5/15)*(4/14)*(7/13) = 0.05128
Now, you can draw them in three different orders (ggy, gyg, or ygg), so the odds of any of these three are
0.05128*3 = 0.1538 = 15.38%
b) would work the same way. Find the odds of drawing them in a particular order, then multiply by how many orders there would be (which should be 12 in this case).
Tim F
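As a cross-check, here is a minimal Python sketch; the counts (15 items total, 5 of them g and 7 of them y) are the ones implied by the fractions above:

```python
from math import comb

# counts implied by the fractions above: 15 items, 5 g and 7 y
p_ordered = (5/15) * (4/14) * (7/13)        # g, g, y in that specific order
print(p_ordered * 3)                         # times 3 orders -> about 0.1538

# same answer from the hypergeometric formula
print(comb(5, 2) * comb(7, 1) / comb(15, 3))
```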

    0
    #100364

    Tim Folkerts
    Member

    I’d like to refine a bit of what Bob J said. 
    1)  A 2-level factorial design is embedded in a CCD, but it can have any number of factors.  Most typically people use 3 or 4 factors.  (2 factors is almost too simple for RSM, while 5+ factors requires more experiments than most people are usually willing to run.)
2)  CCD can be 3 levels, but typically it is 5 levels.  Furthermore, these levels are usually not evenly spaced, so that can create some difficulty in setting up the experiment.
     
    Tim F 

    0
    #100121

    Tim Folkerts
    Member

    > No… It would be the X bar of the sample.(sample mean).
     
    Of course, the sample mean is used as an estimate of the process mean, so in that sense X-bar could be called a measure of the process mean. 
     
    Tim F

    0
    #99868

    Tim Folkerts
    Member

    If I read the question correctly then the simple answer is “no!” 
    To find an overall average, you NEED to do a weighted average, where each value is weighted by the FTE at that site.  If you don’t know the FTE, you can’t find the answer.
    For an extreme example:
    Site 1:  2 employees; 1 is sick all week.  They report average time lost is 20 hr/week per person.
    Site 2: 98 employees, none are sick.  They report average time lost is 0 hr/week.
    The overall average time lost is certainly not (20+0)/2 = 10 hr per week.  It is really (20*2 + 0*98) / (2+98) = 0.4 hr per week per person. 
     
    Ask the sites for the complete information and then find the average!
     
    Tim F

    0
    #99791

    Tim Folkerts
    Member

    First of all, we need to assume some distribution, and the binomial would be appropriate for this circumstance (assuming the defects are indeed randomly distributed).  Using Excel, it is easy to generate a table of values with the odds of getting any number of defects.  Below I’ve pasted the results: column 1 = # of defects, column 2 = % defective, column 3 = odds for getting that number of defects or less when 0.8% are bad and you have 200 pieces. 

# defects   % defective   odds of that many or fewer
 0          0.00%         0.201
 1          0.03%         0.524
 2          0.07%         0.784
 3          0.10%         0.922
 4          0.13%         0.977
 5          0.17%         0.994
 6          0.20%         0.999
 7          0.23%         1.000
 8          0.27%         1.000
 9          0.30%         1.000
10          0.33%         1.000
    This applies to a single box.
    For a set of 15 boxes, just raise the sample size to 15*200 = 3000.  Then you get

# defects   % defective   odds of that many or fewer
 0          0.0%          0.000
 5          0.2%          0.000
10          0.3%          0.001
15          0.5%          0.034
20          0.7%          0.242
25          0.8%          0.632
30          1.0%          0.905
35          1.2%          0.987
40          1.3%          0.999
45          1.5%          1.000
50          1.7%          1.000
55          1.8%          1.000
60          2.0%          1.000
     
Here, 90.5% of the shipments of 3000 pieces will have 30 (1%) or less total defects.  95% of the shipments will have 32 (1.07%) or fewer total defects.
(Again, all of this is based on a binomial distribution.  If the defects are not randomly distributed, then the numbers above may be suspect.) 
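For anyone who wants to reproduce this kind of table outside Excel, here is a minimal Python sketch using scipy's binomial CDF with the 0.8% rate and the 200- and 3000-piece sample sizes from above:

```python
from scipy.stats import binom

p = 0.008                       # 0.8% defective
for n in (200, 3000):           # one box, then the 15-box shipment
    print(f"sample size = {n}")
    for k in range(0, 11):
        # odds of seeing k or fewer defects in n pieces
        print(f"  {k:2d} defects or less: {binom.cdf(k, n, p):.3f}")
```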
     
    Tim F

    0
    #99789

    Tim Folkerts
    Member

    You might check at
    http://www.research.att.com/~njas/oadir/
     
    They have an extensive list of orthogonal arrays, as well as references to other sources.
    The arrays are organized in several ways.  In one list they have all the possible arrays for any number of experiments from 2 to 100  (many of them show the full arrays – others just say they exist).  For example, the possible experiments with 92 trials include:
  92   1 (*)   2^91
  92   2 (*)   2^13  23^1
  92   3 (*)   2^2  46^1
  92   4 (*)   4^1  23^1
So here they have an experiment with one factor that can take on 46 values and 2 that can take on 2 values.  That’s not quite 100, but it is a lot more than 9!
     
    Tim F

    0
    #99770

    Tim Folkerts
    Member

    Phil & TC:
     
    I’d add a caveat to the caveat:  “If you over-fit, you MAY get a model that fits very well within the range of the data you have.  However, you can also get a model that fits only right near the specific data points you have.  Extrapolating (estimating values outside the range of your data points) will almost certainly work poorly, but even interpolating (estimating values within the range of data you have) can be quite suspect – producing wild oscillations in the estimates between the points!”
     
    Tim F

    0
    #99677

    Tim Folkerts
    Member

In a business setting, I would tend to say that a control chart for growth rate wouldn’t make sense.  To make a control chart, the first requirement is that the process is “in control”.  For most companies, growth is anything but in control.  Every month it seems there is a new “special cause” of variation – a supplier can’t send a shipment on time; a customer needs a rush order; a salesman lands a big new customer; you release a new product; your competition releases a new product; ….
     
    Tim F

    0
    #99667

    Tim Folkerts
    Member

    OOPS!
     
    I got the numbers mixed around a bit.  I was half way between doing it with the built-in functions and doing it from scratch.  Faceman got it right!
     
{The other possibility is to do     =A1*24*60*60 for the number of seconds since midnight}
     
    Tim F

    0
    #99665

    Tim Folkerts
    Member

    Steve’s method using text functions is one method.  You can also use time functions that are built in to Excel.  (The real hard-core mathematicians can always do it from scratch, but that might be a bit much). 
=SECOND(A1) returns the seconds since the last minute for a cell formatted for time.  Similarly for =MINUTE(A1) and =HOUR(A1).
 
So the total number of seconds would be
=HOUR(A1)*3600 + MINUTE(A1)*60 + SECOND(A1)
     
    (Just be careful not to go from one day to the next, or you will have to modify this a bit)
     
     
    Tim F

    0
    #99573

    Tim Folkerts
    Member

    I find agricultural examples to be relatively intuitive, so let’s consider a 2 level factorial design.  I should warn you, it’s kind of long….
     
    Suppose you want to see what factors might affect how well plants in your garden grow.  You choose several factors:
    R: rototill the soil to loosen it
    F: fertilize the soil
    W: weed the garden regularly
    A: water in the AM
    P: water in the PM
     
     
Before going too far, it is important to realize that you need at least 1 experiment for everything you want to know.  The first thing to know is just the overall average.  Then to check the importance of each of the 5 main effects, you would need 5 more experiments.  For each of the 10 two-way interactions (RF, RW, RA, RP, FW, FA, FP, WA, WP, AP), you would need 10 more.  Then there would be higher-order interactions, but we’ll ignore those for this experiment.
     
    Due to time and space constraints, suppose we choose to run 8 experiments.  First, we’ll rototill 4 of the plots…
    1    R+
    2    R+
    3    R+
    4    R+
    5    R-
    6    R-
    7    R-
    8    R-
     
    From each half, choose half to fertilize.  From each half of those, choose half to weed (I hope the tables come out OK):
     
         R  F  W
    1    +  +  +
    2    +  +  –
    3    +  –  +
    4    +  –  –   
    5    –  +  +
    6    –  +  –
    7    –  –  +
    8    –  –  –
     
    For watering, we don’t want to repeat any of these choices.  If we did AM watering only for 1-4, and 1-4 produced more veggies, then we wouldn’t know whether rototilling or AM watering was the reason.  We say that R & A are confounded.
     
    One acceptable choice would be
         R  F  W  A  P
    1    +  +  +  +  +
    2    +  +  –  +  –
    3    +  –  +  –  +
    4    +  –  –  –  –
    5    –  +  +  –  –
    6    –  +  –  –  +
    7    –  –  +  +  –
    8    –  –  –  +  +
     
    Each of the 5 main effects has a different pattern.
     
     
Now suppose we want to see if there is an interaction between R & F.  Perhaps doing one OR the other is good, but both or neither is bad.  The set of experiments with one OR the other is 3,4,5,6.  Unfortunately, that is exactly the same set that skips AM watering – they are confounded!  There is no way to know if 3,4,5,6 did well because we skipped watering in the morning, or because of the RF interaction.
     
    Now look at the AP interaction.  Look at the trials that are watered either AM OR PM, but not both: 2,3,6,7.  That’s not the same as any of the other main effects, so we could tell if watering once and only once per day is an advantage. (Actually, that is a slight fib.  Weeding OR Fertilizing (but not both) also happens for plots 2,3,6,7.  Thus if 2,3,6,7 do well, we would conclude that either [F OR W] or [A OR P] matters, but we don’t know which.)
     
    Then we could start looking for 3-way interactions….
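If it helps to see the bookkeeping, here is a minimal Python sketch that generates the same 8-run table; reading the columns above, the generators appear to be A = R*F and P = R*W (that is my inference from the table, not something stated in the post):

```python
from itertools import product

print(" R  F  W  A  P")
for R, F, W in product((+1, -1), repeat=3):    # full 2^3 design in R, F, W
    A = R * F                                  # A shares its column with the RF interaction
    P = R * W                                  # P shares its column with the RW interaction
    row = (R, F, W, A, P)
    print(" ".join(f"{'+' if x > 0 else '-':>2}" for x in row))
```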
     
     
    Tim F
     
     
     
     

    0
    #99382

    Tim Folkerts
    Member

>Knowing simply that the response variable is airflow, I’m curious how can you conclude that all three factors are significant without collecting any data?

OK, that was a bit of a leap. Still, I have two “hand-waving arguments”. First, Andrea probably wouldn’t have narrowed it down to these three if she didn’t expect them to be important. Secondly, the variations seem quite large. The area changes by 2x for factors 1 & 3, and even small changes in area tend to significantly change fluid flow rates. (I’ll admit that the effect of “length” was mostly a stab in the dark, especially without knowing exactly what “length” is being varied.)

>my tendency would be to run a full factorial 3-factor design

That would also be a perfectly reasonable starting point. One advantage is that the FF is a subset of the CC design, so you could do another 6 experiments later to create a CC design.

>Will a unit running with Diam1=1.7 and Length=4 and Diam2=2.8 have sufficient function to get a decent measurement?

Good point. If this IS a concern, then a BB would be a good choice. None of the trials have all three factors at their most extreme values.

Tim F
P.S. This gets confusing when Tim is responding to Tim ;-)

    0
    #99160

    Tim Folkerts
    Member

Have you considered using some pattern to the selection of cavities, rather than just random?

With 128 cavities, I would guess you have an 8×16 grid (although that isn’t critical to what follows). I’m no expert on molding, but I expect at least some of the variation is due to where the mold is physically located. For example, the temperature may be lower toward the edges, or the plastic may flow in from one side toward the other.

You ought to be able to do a response surface analysis of the molds based on location of the mold. This would be one step up from ANOVA. (I would probably still do the ANOVA first to see if the following is worth studying.) Rather than just noting if there is a variation, you could attempt to find an equation to describe the variation across the chambers. This could give you additional insights. The experiments would still be the same – you just do a few more calculations to determine the role that position plays, in addition to the role of the other 9 variables. A random choice of cavities would still allow a reasonable response surface fit. A more systematic choice should provide a bit stronger statistical information.

Of course, it is quite possible that the random variations within a cavity are bigger than any systematic variations.

Tim F

    0
    #98996

    Tim Folkerts
    Member

    T.K.
     
I set up a little table, but basically the equation is the binomial distribution:
=BINOMDIST(0, 30, 0.095, TRUE)
The first number is the number of defects found, the second is the sample size, the third is the defect rate.  The result here is 0.05006.
By trial & error, I adjusted the defect rate until the result was down to 0.05 (i.e. there is a 5% chance of getting 0 defects when you have 30 objects and 9.5% are bad).  Then I readjusted it until I got 0.5 for an answer, and finally a third time until I got 0.95 (i.e. there is a 95% chance of finding no defects when 0.23% of the products are bad).
    (Officially, it would be better to use the hypergeometric distribution.  The binomial assumes you are drawing from an infinite size lot, with a given defect rate.  The hypergeometric includes the fact that you have 300 to draw from in the lot.  For example, with 1 defect out of 300, you would have a defect rate of 0.33%, so you can have 0% defects, or 0.33% defects, but you can’t really have 0.23% defects in the lot!)
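The same trial-and-error search can be automated; here is a minimal Python sketch using scipy's root finder (the 30-piece sample is the one from the post):

```python
from scipy.optimize import brentq

n = 30   # sample size

def p_clean(defect_rate):
    # chance of finding zero defects in a sample of n (binomial with k = 0)
    return (1 - defect_rate) ** n

# defect rates at which a clean sample of 30 happens 5%, 50%, and 95% of the time
for target in (0.05, 0.50, 0.95):
    print(target, brentq(lambda p: p_clean(p) - target, 1e-9, 0.5))
```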
     
    Tim F
     

    0
    #98990

    Tim Folkerts
    Member

    T.K.
     
It seems like a reasonable plan.  Of course, it depends on what level of quality you are trying to achieve.  A couple of minutes playing with Excel indicated that for each batch:
    * you have a 95% chance of catching a lot with 9.5% defects.
    * you have a 50% chance of catching a lot with 2.3% defects.
    * you have a 5% chance of catching a lot with 0.2% defects. 
     
If 2-3% defects is acceptable in a given lot, then the plan is good.  If 10% defects is OK, then you are taking more samples than you need.  If, on the other hand, you would like to be fairly sure that all the lots are less than 1% defects, then you should sample more.
     
    Tim F
     
    P.S.  Presumably, the overall quality will be at least as good as the 2-3% stated above.  Lots with few defects get accepted, so they lower the average.  Lots with many defects usually get caught and sorted, so that only good products get through – again lowering the average.  It is only the occasional lot with 2-9% defects that gets through.

    0
    #98890

    Tim Folkerts
    Member

    Fernando,
     
    To me it would be valuable to know more about your factors.  As I see it (and please correct me if I’m wrong)… 
    You have 3 factors which you can control.
*  Two choices of impellers, which control the flow rate.  We could call this factor I (for impeller), and it can take on the values I = (1,2) 
    * Eight choices of “flow distortion”: D = (1,2, …8)
    * Four operating conditions: C = (1,2,3,4)
    You have one output, efficiency, that you are trying to control: call it E.
You are looking for an equation which will predict the efficiency, so you need 9 coefficients (call them k(0) – k(8)):
    E = k(0)   [constant term] 
         + k(1)*I +k(2)*D + k(3)*C    [linear terms]
         + k(4)*I*D + k(5)*I*C + k(6)*D*C   [interactions]
         + k(7)*D^2 + k(8)*C^2   [quadratic terms]
     
    First of all, the factors need to be rankable – and ideally uniformly spaced.  If the values can’t be set in a logical order then an equation like the one above is useless.
     
    For example if the four “conditions” are four different operators, then an equation like the one above is senseless.  You can determine who is the best operator, but you can’t logically say anything like “the efficiency is proportional to where the operator’s name falls in the alphabet”, which is what the regression analysis would be trying to tell you if you simply entered the names in alphabetical order.
    Slightly better would be, for example, a machine that has four speeds, but the speeds are not uniformly arranged – like 100 rpm, 150 rpm, 300 rpm, 350 rpm.  Better yet would be something like 100 rpm, 200 rpm, 300 rpm, 400rpm.  Best would be where you could dial in any setting you want, but it sounds like that isn’t possible for you.
     
You are right – when you can’t control the settings (or when the factors are categorical rather than numerical), then standard forms like Box-Behnken & CCD are of little or no use.  Something like D-optimal would seem like your best alternative.  (Assuming, of course, that my understanding of the problem was correct to begin with!)
     
    Tim F

    0
    #98363

    Tim Folkerts
    Member

Marty,

Interesting. I have Minitab 14.1. I first ran it just using the Graph, Probability Plot menu choice. It plots out the data with a fit to a distribution, along with some basic stats. Just now, I reran it with your method. The two results are indeed quite different. I also tried Stats, Basic Stats, Normality Test, and that gave the same results as my original Probability Plot method. I’m not sure why Minitab gives different answers to basically the same question.
    Tim F

    0
    #98319

    Tim Folkerts
    Member

When I run the numbers in Minitab, I get much different results. First of all, a quick check using a histogram makes it look like the data set is not normal to begin with. There seems to be an unusual grouping near 0.035 and again near 0.045. The numbers I get are:

              AD      P
Normal        0.366   0.413
Weibull       0.341   0.480

These numbers don’t seem to support either conclusion, although normal is slightly better for both tests.
Tim F

    0
    #97889

    Tim Folkerts
    Member

Personally, I think Stan & Dr. Steve both have valid points. They aren’t contradicting each other, merely looking at different aspects of the same problem.

If I might paraphrase…
vb: “I can make great parts within a given run, but there is a lot of variation between runs. How do I improve the overall cpk?”
Stan: “Consistency is a problem. Fix that problem and your cpk will improve.”
Dr. Steve: “The calculations themselves are a problem. Fix that, and your cpk will improve.”

PHYSICALLY, the set-up between runs is a problem. If you could standardize the set-up procedure, or tune the process at the beginning to match the center point of the specs, then the cpk calculated for the combined runs would approach the great values you get for the individual runs.

STATISTICALLY, the grouping of data from different runs is a problem. Given the historical consistency within a run, there is no need to take lots of data. On the other hand, when you average together several runs, the overall cpk calculated from the standard equations is not a good measure of quality. cpk is supposed to tell you something about how frequently products are likely to be out of spec. With cpk = 5 within any given run, the odds of a bad unit are vanishingly small. No matter how many different sets you mix together, there will still be a vanishingly small chance of a bad unit. The EFFECTIVE cpk is still about 5. However, when you calculate cpk for the entire set, the value can be much smaller. The STANDARD cpk drops dramatically – to 1.4 in your case. There are still basically no bad units, but the calculations make it look like the situation is worse than it is.

So listen to both Steve & Stan, because they both have something valuable to say!
    Tim F
    CQE, CRE, and (I almost hate to admit it in this thread) PhD

    0
    #97829

    Tim Folkerts
    Member

    Suppose you were going to Las Vegas to play the new “Black Belt Slot Machine”.  You know that the machines are designed to produce a payout with a normal distribution and standard deviation of $5, but the average for each machine may be different.  It costs $100 to play.  You play once and you get back $98. 
     
    Should you quit your machine and go play another machine?  Is this one going to lose you money in the long run?  Of course, with a single trial you can’t really be sure. So you keep playing.
     
    Suppose you play 5 times and you get back an average of $107.  Is this a good machine?  It might be luck, but (from JM’s table)  there is an 80% chance that this machine really does return more than an average of $100.  This looks like a good machine to keep playing.
     
    But suppose you play 5 times and you win an average of  $102.  Now it is much harder to decide if this is really a good machine.  That small of a difference is much easier to chalk up to a little bit of luck.  You could try another machine, or you could keep playing this one to see how good it really is.  So you keep playing.  If you get up to 50 trials (again based on the JM’s table), then you are 80% sure this is a good machine.  Sometimes you will lose money, but on average you can expect to come out ahead. 
    But let’s consider one other factor.  Suppose that once you choose a machine, you commit to playing 10,000 more times (equivalent to accepting the rest of the lot).  That’s $1,000,000 you are playing with!  Perhaps 80% sure isn’t good enough for you.  Your first trial – 5 trials with an average of $107 – is only 80% sure of being an above average machine.  There is still a 20% chance that you will come out behind!  If you want to be more confident, the solution is to do more trials to get a better power for the test.
     
    Tim F
     
P.S.  The decline in the power is artificial.  You specified that you wanted a power of at least 80%.  For a difference of 7, the power for 4 trials isn’t enough, but 5 trials is.  Going from 4 to 5 is a big change: 4 is likely to be significantly below 80%; 5 will likely be significantly above 80%.  For a difference of 1, the power for 196 trials isn’t enough, but 197 trials is.  Going from 196 to 197 is a small change: 196 will be only slightly below 80%; 197 will be only slightly above 80%.
     
P.P.S.  Probability and statistics were originally developed in order to analyze gambling! 

    0
    #97740

    Tim Folkerts
    Member

    Gabriel,
     
    GOOD POST.  I think we pretty much agree, we’re just coming at it from opposite directions.  Basically, sampling is a poor way to ensure quality.
    As you point out I was analyzing various plans keeping AQL fixed.  It could certainly be argued that keeping RQL fixed is a better idea.  I chose AQL primarily because that is more familiar for most people.  (Ideally, AQL=RQL, so any lot better than a certain level is accepted, and any lot worse than that same level is rejected, but that basically requires 100% sampling.)
     
    Also, you make a good point – when the quality is expected to be very good (say, 0.1% or better) then c=0 is a good plan because you don’t expect any defects.  For items of lesser quality (say 1% or worse), then defects are to be expected, and c=0 forces a sampling plan with too few samples to have a real feel for the quality of the sample. 
     
Put another way: c=0 is a “black and white” sampling plan which is effective when you expect “black and white” (i.e. all good, no bad) results.  c=0 is poor when you expect “gray” (i.e. mostly good, but some bad) results.  I was focusing on “gray”; you were focusing on “black and white”.
     
    Tim F

    0
    #97675

    Tim Folkerts
    Member

    PB,
     
First of all, I certainly agree with Pappas that good control to begin with is better than sampling as a means of quality control.  Sampling ought to be a crude check to see that things are roughly on track, not a means of accurately analyzing a product.
Second, let me try the table again.  Somehow the formatting always seems to come out poorly.  The three numbers are c (the accept #), n (the sample size), and RQL (the level of defects that will get rejected).  All of these are for AQL = 1%, alpha = beta = 0.05 
     
    c   n   RQL
    0   5   45%
    1   38   12%
    2   82   7.5%
    3   137   5.5%.
     
As for choosing among these sampling plans, the challenge is to decide what level of defects you are comfortable with.  I could make a sampling plan to achieve any level of confidence you wanted. 
Consider the plans above.  All of them are 95% likely to accept a lot with 1% defects, so they all would accept the same number of “good” lots.  The question becomes “how bad do the lots have to be before I am (95%) sure to reject them?”  For the c = 0, 1, 2, 3 plans above, this translates to 45%, 12%, 7.5%, and 5.5% respectively for how bad a “bad” lot would need to be.  Obviously, the bigger the sample, the better you are at detecting bad lots.  With the c=0 plan, you would be accepting many of the lots with 10% defects, but with the c=3 plan, you would accept virtually no lots with 10% defects. 
If you go to a large sample size, but keep c=0, then you are implicitly stating that 1% defects is not acceptable.  I’m too tired to do the numbers now, but if you draw 100 samples and even 0.1% are bad, there is a pretty good chance (perhaps 10%) of getting 1 defect and rejecting the lot.  If that is the case, then you can redo the table above to see what kind of sampling you need to do to get the quality you are looking for.
    One other option if you want to reject lots quickly is to use multiple sampling.  For example, if you got 2 rejects out of the first 5, there is little reason to continue to 137 samples.  You already know the sample is worse than 1%.
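As a sketch of where those RQL numbers come from, here is a minimal Python snippet that takes the (c, n) pairs from the table above and solves for the defect rate that only gets accepted 5% of the time:

```python
from scipy.stats import binom
from scipy.optimize import brentq

beta = 0.05                                    # consumer's risk used in the table
plans = [(0, 5), (1, 38), (2, 82), (3, 137)]   # (accept number c, sample size n)

for c, n in plans:
    # RQL: incoming defect rate at which only beta of the lots would still be accepted
    rql = brentq(lambda p: binom.cdf(c, n, p) - beta, 1e-6, 0.999)
    print(f"c = {c}, n = {n}, RQL = {rql:.1%}")
```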
    Tim F

    0
    #97671

    Tim Folkerts
    Member

PB,

You have to be careful. c=0 plans don’t particularly improve the situation – in fact the opposite is usually true.

The problem is that c=0 plans are usually still based on the consumer risk. Suppose you want only 5% (95% confidence) of shipments that are truly at 1% (AQL = 1) defective to be rejected. If you look at even 5 items, you have about a 5% chance of getting a defect, so with a c=0 plan, you would set the sample size at 5. The RQL (how bad it has to be to reject 95% of lots) is an astounding 45%!

SUMMARY: For alpha = beta = 0.05, AQL = 1%, I get
c   n     RQL
0   5     45%
1   38    12%
2   82    7.5%
3   137   5.5%

The more discrimination you want, the more samples you should take and the HIGHER you should set your acceptance level. (Of course, that drives up cost, so you need a balance.)

Personally, I view c=0 inspection as the “ostrich with its head in the sand” approach to sampling. You don’t want to see any defects, so you bury your head in the sand and don’t really look for the defects!
    Tim F

    0
    #97482

    Tim Folkerts
    Member

Michael,

I don’t know the “official” answer, but two answers jump to mind. First create the four-factor design. Then either:
    1) do it twice, once with each value of the final factor.
2) divide the runs in half randomly and do each half at one of the two values for the final factor.

Tim F

    0
    #97321

    Tim Folkerts
    Member

    Bill,
On my computer it is “EQNEDT32” in “C:\Program Files\Common Files\Microsoft Shared\Equation”.  Look there or try a search for “EQNEDT”.
     
    Tim F

    0
    #97319

    Tim Folkerts
    Member

    I agree that too much distracting material is a bad thing in presentations.
     
    However, if you do have a need to create equations, there are a few options. 

The simplest is to use the “Symbol” font, which should be included on pretty much any PC or Mac.  It has all the Greek letters, plus some other math symbols.  From Word or PPT, try the Insert, Symbol menu choice.  Throw in some subscripts and superscripts, and you have a pretty good looking equation.
    There is a program called Equation Editor available on most PC’s.  From within Word or PPT you can find it using Insert, Object, Microsoft Equation.  It takes a little getting used to, but you can create just about any equation you’ve seen printed in a book.  (Personally, I like running the program as a stand-alone, then cutting and pasting into other documents.  This gives a few more options.)
     
    Tim F

    0
    #97216

    Tim Folkerts
    Member

    There isn’t a whole lot of difference between the two.  Some general considerations:
    Box-Behnken: 

    requires fewer runs (for 3 or 4 factors).
    only three levels required
    avoids the “corners” which can be good if extreme variations are to be avoided.
    limited ability to block the trials
    Central Composite:

    Can be run with 3 or 5 levels. 
    Can be built up from a previously run factorial design.
    Three level designs (“face centered” designs) provide a poor ability to determine quadratic coefficients.
    Some designs require points “outside” the original ranges of the factors, which can be a safety problem.
    Either design can estimate interactions between the factors.
     
    Hope that helps a little bit. 

    0
    #97122

    Tim Folkerts
    Member

    You could just do trial and error until you find the right value.  Or you could do some more algebra and get a general solution.
     
    I gave it whirl and ended up with the equation
      n = (Z^2) p(1-p) / (alpha)^2
where alpha = 1-(confidence level), p = probability of success, and Z is the value from the standard normal table for alpha/2.  This gives the number of trials needed to be 100*(1-alpha) percent sure that the observed proportion will be within +/- alpha of p when doing n trials. 
     
    SPECIFICALLY:
    Given:  p = 0.8, alpha = 1-0.95 = 0.05
    Look up: Z(0.025) = 1.96
    Calculate: n = (1.96)^2 * 0.8 * (1-0.8) / (0.05^2) = 246
    Interpret:  If you do 246 trials and 80% of all the population would truly answer “yes”, then there is a 95% probability that you will get between 75% and 85% “yes” responses.
     
    The worst case is at p = 0.5, when you need to do 385 trials.
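A minimal Python version of that calculation (scipy for the normal quantile; the 0.8 / 95% / +/- 5% numbers are the ones used above):

```python
from math import ceil
from scipy.stats import norm

p = 0.8          # expected fraction answering "yes"
conf = 0.95      # confidence level
margin = 0.05    # desired accuracy, +/- 5%

z = norm.ppf(1 - (1 - conf) / 2)          # about 1.96 for 95% confidence
n = ceil(z**2 * p * (1 - p) / margin**2)
print(n)                                   # 246 here; 385 in the worst case p = 0.5
```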
     
    Tim F
     
    P.S. It is possible to use two different alphas.  Currently, the equations assume 95% confidence that 80% is accurate within +/- 5%.  There are other possibilities: you could be 95% sure the answer is within +/- 2%; or 99% sure it is +/- 5%;  etc.  The  equation would still work, but there are two different “alpha” values to use. 
     

    0
    #97077

    Tim Folkerts
    Member

A couple of thoughts…

1) The mean of your data is 27.44, so you could always calculate the chi-square value yourself (or do it in Excel): the sum of (Observed-Expected)^2/Expected. I get chi-square = 75, which even for 17 degrees of freedom is way off the charts. The machines don’t seem to be consistent.

2) I would expect the number of errors to follow a Poisson distribution. However, this means that you would expect (st dev)^2 = mean. Since 11^2 >> 27.44, there is much more variation than expected from a Poisson distribution.

Tim F
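A minimal Python sketch of both checks; the per-machine error counts below are hypothetical placeholders, since the original data isn't in the post:

```python
from statistics import mean, variance
from scipy.stats import chisquare

# hypothetical per-machine error counts (18 machines), just to show the mechanics
counts = [31, 18, 42, 25, 9, 36, 27, 22, 40, 15, 33, 20, 38, 12, 29, 45, 24, 28]

# 1) chi-square test that all machines share a common rate (expected = overall mean)
print(chisquare(counts))

# 2) Poisson check: for Poisson-distributed counts, the variance should be close to the mean
print(mean(counts), variance(counts))
```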

    0
    #97076

    Tim Folkerts
    Member

    You need two more pieces of information:
1) Approximately what fraction will be “yes”?
2) How sure do you want to be?
As long as the sample size

    0
    #96788

    Tim Folkerts
    Member

Suppose you throw 6 pieces of paper into a hat – 3 labeled “B” and 3 labeled “C”. If you draw them out randomly, what are the odds that you will draw out the 3 B’s first?

On the 1st draw, there is a 3/6 chance it is B
On the 2nd draw, there is a 2/5 chance it is B
On the 3rd draw, there is a 1/4 chance it is B

The odds that the first three are B (assuming it is random) are
(3/6) * (2/5) * (1/4) = 1/20 = 5%

If the B’s and C’s are the same, there is a 5% chance of getting all of the B’s first. Thus if you really do get the three B’s first, you might reasonably assume that there was some reason for the B’s to come out first.

Same thing if you have two processes. If the two processes are identical, the odds that the 3 from the B process are better (by whatever your metric is) is 5%. So if the B’s DO come out ranked 1-2-3, then presumably the B’s really ARE better.

Tim F
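A quick simulation sketch in Python that checks the 5% figure:

```python
import random

trials = 100_000
hits = 0
for _ in range(trials):
    hat = list("BBBCCC")             # 3 B's and 3 C's in the hat
    random.shuffle(hat)
    if hat[:3] == ["B", "B", "B"]:   # all three B's drawn first
        hits += 1

print(hits / trials)                 # should land near 1/20 = 0.05
```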

    0
    #96227

    Tim Folkerts
    Member

    Of course, since you have 3 factors x three levels = 27 possible trials, you can’t have exactly a 1/2 factorial design = 13.5 trials  ;-)
     
One slightly roundabout way is to start by creating a general full factorial design for however many factors and levels that you have.  (27 trials in your case).  Then use this to create a custom response surface design.  Once you have this design, then modify the response surface design using the D-Optimal option.  You can pick any number of trials, as long as you have at least as many trials as parameters you are trying to fit.  
     
This may not produce exactly the same results as standard fractional factorial designs, but they should be fairly balanced and efficient. Three level designs also fit well into a Plackett-Burman or Central Composite Design.  (You have to be a little careful interpreting the results if the factors are attribute (discrete) instead of variable (continuous).)
     
    Another option would be to look through the list of Taguchi designs to see if any of them fit the number of factors & levels in your experiment.
     
    Tim F
     

    0
    #96204

    Tim Folkerts
    Member

One solution is to find a way to switch from discrete attribute data (e.g. pass/fail) to continuous variable data (e.g. length). For example, if “pass” means a certain range of diameters, then measure the diameters directly. Then you can try to optimize the mean and standard deviation.

Another quick thing to check is which of the 8 runs produced a lot of failures. If both replicates at 4 settings failed, then there seems to be a strong pattern. If the failures occurred more or less at random, then you might well have some additional, uncontrolled factor at play.

Tim

    0
    #96203

    Tim Folkerts
    Member

Sidharta,

Zeroing in on an efficient design is, of course, the power of DOE. Rather than changing one factor at a time, or exploring at random, or simply trying every possible combination, DOE tries to find an effective way to choose the experiments that are needed to discover what you want to know. This choice of experiments is related to a large number of considerations. Some that come to mind:
    how many factors do you want to study?
    are interactions of interest?
    is a linear fit good enough?
    how expensive are the experiments?
    how much do you stand to gain?
    how easily can you control all the factors?
    how time-consuming are the experiment?
how knowledgeable are the experimenters?
how knowledgeable are the analysts?

For example, if you want anything more complicated than a linear fit, you need to run more than 2 levels. Or if you have 4 suppliers, then you need to run a 4-level experiment. It might be that a Taguchi design is most efficient, but if the analysts don’t understand them, you would be better off with a fractional factorial design that requires a few more trials but is well-understood. As with most things in life, there isn’t a black-and-white answer. One size does not fit all.
    Tim

    0
    #96189

    Tim Folkerts
    Member

The function that needs to be integrated (called the probability density function) is rather ugly. It is
P(x) = [ 1 / {s (2 pi)^0.5}] e^-[ (x-m)^2 / 2s^2 ]
where m = mean, s = st dev.
For the “standard” form with m=0, s=1, this is
P(x) = [ 1 / (2 pi)^0.5 ] e^-[ x^2/2 ]
This is notoriously difficult to integrate, which is why everyone uses tables or computers. For example, in Excel you can use =NORMDIST(-3,0,1,TRUE) to integrate from (-infinity) to (-3) when the mean is 0 and the stdev is 1. Similarly, =NORMDIST(3,0,1,TRUE) goes from (-infinity) to (+3). The difference between these two is the integral from -3 to +3.

Tim
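The same integral in Python, as a small sketch using scipy:

```python
from scipy.stats import norm

# fraction of a standard normal distribution within +/- 3 standard deviations
inside = norm.cdf(3) - norm.cdf(-3)
print(inside)        # about 0.9973
```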

    0
    #96188

    Tim Folkerts
    Member

I would start by going back to the basic stats. You have a binomial distribution. Excel can quickly generate a table of the odds of getting 0 failures out of 6 trials with varying quality of parts.

For just six trials, it turns out that products with a 99% pass rate will still produce 1 failure 5% of the time. At the other end, products with a 60% pass rate will produce no failures 5% of the time. With 0 failures, you can be pretty sure it is at least 60% good. With 1 failure, you can be pretty sure it is between 42% and 99% good, with a most likely value of 83% good.

That doesn’t seem like a very precise test. I suppose that if you already know it should be considerably better than 99% good and you just want to test for gross failures of the process, then it would be a useful test.

Tim F

    0
    #96125

    Tim Folkerts
    Member

Sidhartha & I were answering slightly different questions, and it’s not certain which one was originally intended.

I was answering the theoretical minimum # of trials to get the information requested, which is literally what was asked. 5 trials will give you enough information to determine main effects for 4 factors (6 trials if you want a quadratic fit to the 3-level factor).

Sidhartha addressed the more practical question of what standard designs could be applied and how to rule out interactions (instead of simply ignoring them). A full factorial design is always an option when time and money don’t make it prohibitive. Since you have a 3-level factor, the common designs for 2-level factors don’t really work and the equation 2^n isn’t right. A full factorial design would require not (2^4) = 16 trials, but (2^3) * (3^1) = 24 trials. Minitab offers this option, but it doesn’t do fractional factorial designs for any situation other than 2-level factors.

If you want to use standard designs, you could also try a Taguchi design, but the best Minitab has to suggest is L36 (with 36 trials).

You can also use some sort of optimization (e.g. D-Optimal) to select a smaller set of trials.
    Tim

    0
    #96093

    Tim Folkerts
    Member

JSB,

Each coefficient you are trying to determine requires (at least) one trial. A single trial allows you to estimate only one coefficient – the mean. Each additional trial allows you to estimate how the factors actually affect the outcome. If you are going for a linear model with no interactions, then you have just four more coefficients to determine – the effect of each of the four factors. Five trials is the minimum that would allow an estimate of each of these coefficients.

Since one of your factors has three levels, you could try a quadratic fit for that factor. You don’t have to, but it would be a logical option and would still ignore interactions. The quadratic requires one more coefficient (for the “x^2” term), so now you would need six trials.

Just as good practice, I would further suggest repeating at least one of the trials to get an estimate of one more parameter – the repeatability.
    Tim.

    0
    #96046

    Tim Folkerts
    Member

The probabilities for a normal distribution are actually a little worse than Robert suggested.
For +-3 st dev, the odds are indeed 0.0027.
For +-2 st dev, odds of failure are 0.0455 (which is often rounded to 0.05) – 0.0027 (you can throw out cases where you are beyond +-3 st dev because the first rule already catches these) = 0.042.  The odds of 2 out of three failing are
  (0.042 x 0.042) x (1-0.042) x 3 = 0.0052
  (2 failures) x (one success) x (three locations for the success)
For +-1 st dev, 0.317 are outside the limits.  Odds of 4 out of 5 outside the limits are
  (0.317 x 0.317 x 0.317 x 0.317) x (1-0.317) x 5 = 0.034
  (4 failures) x (one success) x (five locations for the success)
     
      At one standard deviation you are looking at 67% of all of the data therefore the chance of a data point being in excess of that is .33.
    .33x.33x.33 = .036
    and
    .33x.33x.33x.33 = .012
    so the rule 4 out of 5 in excess of 1 standard deviation is a conservative match to the first two rules :
    1 data point outside of 3 standard deviations
    2 out of three data points outside of 2 standard deviations
    4 out of 5 data points outside of 1 standard deviation.

    0
    #95888

    Tim Folkerts
    Member

It took me a bit to work through it too, but I like Statman’s answer. I hope he will agree with the following explanation. He is looking for the mathematically most efficient method to see if there is any improvement. He is doing the same type of calculation I was, but quitting as soon as possible to avoid the expense of extra trials. Basically it comes down to:
1) Test a single unit
2) Are you sure there is an improvement?
If yes, then stop testing.
3) Are you sure there is little or no improvement?
If yes, then stop.
4) Otherwise, go back to (1)

You need to decide how much improvement you are looking for and how sure you want to be. Statman chose to look at the case where you are 95% certain there is some change, but you are not necessarily sure how much change.

Since you seemed to understand my table, let me post a variation (see the sketch after the summary below). Again, the number across is the number of trials, the number down is the number of failures, and the odds of any given unit failing are 20%.

        13      14      21      22      29      30
0       5.5%    4.4%    0.9%    0.7%    0.2%    0.1%
1       23.4%   19.8%   5.8%    4.8%    1.3%    1.1%
2       50.2%   44.8%   17.9%   15.4%   5.2%    4.4%
3       74.7%   69.8%   37.0%   33.2%   14.0%   12.3%
4       90.1%   87.0%   58.6%   54.3%   28.4%   25.5%
5       97.0%   95.6%   76.9%   73.3%   46.3%   42.8%
6       99.3%   98.8%   89.1%   86.7%   64.3%   60.7%
7       99.9%   99.8%   95.7%   94.4%   79.0%   76.1%
8       100.0%  100.0%  98.6%   98.0%   89.2%   87.1%
9       100.0%  100.0%  99.6%   99.4%   95.1%   93.9%
10      100.0%  100.0%  99.9%   99.8%   98.0%   97.4%

If you run 13 trials and get 0 failures, there is still a 5.5% chance the process still has a 20% failure rate and that getting 0 was due to dumb luck. Thus you are only 94.5% sure there is an improvement. Anything less than 13 trials, even with no failures, is not enough.

BUT, if you run one more and still have no failures, then there is only a 4.4% chance of doing that well by dumb luck. Hence you are pretty sure there is some improvement. If you get here, you can quit and be (pretty) sure there was an improvement. But suppose you got a failure in these 14 runs. Getting 1 (or less) failures would occur 19.8% of the time, so now you have to continue doing more trials. At 21 trials and 1 failure, there is still a 5.8% chance that it is dumb luck, so you continue. But with 22 trials and only 1 failure, then you are down to 4.8% and you can quit.

The “beta” columns are to decide when you can be sure the process has definitely not improved to a 10% failure rate.

Summary
PROS:
* most efficient
CONS:
* requires a good grasp of stats to develop the procedure.
* requires a well-trained technician to follow the procedure.
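Here is the rough Python sketch mentioned above of that sequential stop/continue logic (scipy's binomial CDF; the 20% baseline, the hoped-for 10% rate, and alpha = beta = 0.05 are my reading of the discussion, not Statman's exact procedure):

```python
from scipy.stats import binom

p_old, p_new = 0.20, 0.10     # baseline failure rate and the hoped-for improved rate
alpha = beta = 0.05           # how sure you want to be before stopping

def decide(failures, trials):
    """Check after each unit tested; stop as soon as either conclusion is safe."""
    # So few failures that the old 20% rate is very unlikely -> call it improved.
    if binom.cdf(failures, trials, p_old) < alpha:
        return "improved"
    # So many failures that the improved 10% rate is very unlikely -> call it not improved.
    if failures > 0 and binom.sf(failures - 1, trials, p_new) < beta:
        return "not improved"
    return "keep testing"

# Example: 0 failures in 14 trials -> binom.cdf(0, 14, 0.2) is about 4.4%, so "improved"
print(decide(0, 14))
```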
    Tim

    0
    #95870

    Tim Folkerts
    Member

Let’s look at some actual calculations.  With your numbers – 15 trials, 20% failure – the normal approximation doesn’t really work well (we could run those numbers just to see how bad it is).  You can use Excel to calculate some tables using the true binomial distribution.  For example, with p = 0.2 = probability of failure, we can create a table with # of trials along the top and # of failures along the side.  The numbers in the table are the odds of getting that many or fewer failures.  (The mean number of failures for each column is 3, 4, 6, 10, and 20 respectively.) 

# failures   n=15    n=20    n=30    n=50    n=100
 0           4%      1%      0%      0%      0%
 1           17%     7%      1%      0%      0%
 2           40%     21%     4%      0%      0%
 3           65%     41%     12%     1%      0%
 4           84%     63%     26%     2%      0%
 5           94%     80%     43%     5%      0%
 6           98%     91%     61%     10%     0%
 7           100%    97%     76%     19%     0%
 8           100%    99%     87%     31%     0%
 9           100%    100%    94%     44%     0%
 10          100%    100%    97%     58%     1%
 11          100%    100%    99%     71%     1%
 12          100%    100%    100%    81%     3%
 13          100%    100%    100%    89%     5%
 14          100%    100%    100%    94%     8%
 15          100%    100%    100%    97%     13%
 16          -       100%    100%    99%     19%
 17          -       100%    100%    99%     27%
 18          -       100%    100%    100%    36%
 19          -       100%    100%    100%    46%
 20          -       100%    100%    100%    56%
     
So if you did 15 trials, then the odds of getting 0 defects are 4%.  Or put the other way around, if you see 0% defects, you can be fairly sure it came from a batch with less than 20% failures and you have a real improvement.
     
    If you go to 30 trials, then the odds of getting 2 defects are again 4%.   Now, even if you see 2/30 = 6.7% observed defects, you can be pretty sure it is a real improvement.
     
    Go all the way to 100 trials, and the odds of getting 12 defects is about 4%.  Now, even with 12/100 = 12% observed defects, you can be pretty sure it is a real improvement.  With 9% observed defects, you can be basically 100% sure it is a real improvement.
     
    Ultimately, you have to decide how sure you want to be and how big of an effect you want to observe.  Try different numbers of trials in the spreadsheet to see the effect.  Balance this against the cost of the testing and go for it. 
     
    Tim

    0
    #95865

    Tim Folkerts
    Member

    Is there a reason you want to use Excel?  It seems like an inefficient software solution for Fishbone Diagrams.  There are a variety of applications that will create professional, multilayer diagrams that cost $20 – $100.  If this is something you do regularly, it seems it would be well worth the investment.
     
    Tim 

    0
    #95683

    Tim Folkerts
    Member

    I found a fairly technical answer from Wolfram Research (makers of Mathematica software) at http://mathworld.wolfram.com/GammaDistribution.html:  It is pasted below, but I’m not sure how it will come out, since it has a lot of embedded equations.
    As I understand it:
    * the Exponential Distribution predicts how long until some event occurs (the wait time)..
    * the Erlang Distribution predicts how long until “h” separate events occur.
    *  the Gamma Distribution is a more generalized form of Erlang, where you could predict how long until event “3.423” occurs.  (For “real” events, it makes no sense to talk about non-integer values, but in an abstract mathematical sense, you can still do the calculation.)
Thus Gamma is the most general equation for arbitrary numbers of events.  Erlang is the specific form for an integer number of events.  Exponential is the specific form for a single event.
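A small Python/scipy sketch of those relationships (the rate and shape values are just illustrative):

```python
from scipy.stats import expon, gamma

lam = 2.0     # Poisson event rate (illustrative)
h = 3         # wait for the 3rd event -> Erlang(3); non-integer shapes give the general gamma

# P(wait <= 1.5) for the 3rd event: gamma distribution with shape h and scale 1/lambda
print(gamma.cdf(1.5, a=h, scale=1/lam))

# With shape 1 the gamma collapses to the exponential distribution (single event)
print(gamma.cdf(1.5, a=1, scale=1/lam), expon.cdf(1.5, scale=1/lam))
```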
     
    Tim
     
Here’s the web page (I hope):
Given a Poisson distribution with a rate of change lambda, the distribution function D(x) giving the waiting times until the hth Poisson event is

D(x) = gamma(h, lambda*x) / Gamma(h)     (1)

for x in [0, infinity), where Gamma(h) is the complete gamma function and gamma(h, lambda*x) is the (lower) incomplete gamma function. With h explicitly an integer, this distribution is known as the Erlang distribution, and has probability function

P(x) = lambda * (lambda*x)^(h-1) * e^-(lambda*x) / (h-1)!     (2)

It is closely related to the gamma distribution, which is obtained by letting alpha = h (not necessarily an integer) and defining theta = 1/lambda. When h = 1, it simplifies to the exponential distribution.

     
     

    0
    #95527

    Tim Folkerts
    Member

    I should add one important fact.  It is true that if you have a Poisson distribution, then the mean and variance will be (at least approximately) equal.
HOWEVER, it is not true that if the mean and variance are equal, then it must be a Poisson distribution.  If you have small defect rates, then the binomial distribution will also have the mean and variance approximately equal.  It is also quite possible for a normal distribution to have mean and variance equal.
    To really know, you must look at the shape as well.
     
    Tim 

    0
    #95508

    Tim Folkerts
    Member

    It is often difficult to “prove” things when random variation is present, but there are a couple things you could do to show that the numbers are consistent with a Poisson distribution. 
    First, for Poisson distribution, the mean and the variance should be approximately equal, so check these two numbers for your distribution.
    Once you know the mean, you could calculate the expected Poisson distribution and compare it graphically to your data. 
    I also found a web page at http://csssrvr.entnem.ufl.edu/~walker/6203/L1hpoiss.pdf which shows how to test how well data agrees with a Poisson distribution.
     
    Tim
     

    0
    #95290

    Tim Folkerts
    Member

    For those who haven’t seen this before, it is worth considering.
     “If I ran my business the way you people operate your schools, I wouldn’t be in business very long!”
    I stood before an auditorium filled with outraged teachers who were becoming angrier by the minute. My speech had entirely consumed their precious 90 minutes of inservice. Their initial icy glares had turned to restless agitation. You could cut the hostility with a knife.
I represented a group of business people dedicated to improving public schools. I was an executive at an ice cream company that became famous in the middle 1980s when People Magazine chose our blueberry as the “Best Ice Cream in America.”
    I was convinced of two things. First, public schools needed to change; they were archaic selecting and sorting mechanisms designed for the industrial age and out of step with the needs of our emerging “knowledge society”. Second, educators were a major part of the problem: they resisted change, hunkered down in their feathered nests, protected by tenure and shielded by a bureaucratic monopoly. They needed to look to business. We knew how to produce quality. Zero defects! TQM! Continuous improvement!
    In retrospect, the speech was perfectly balanced – equal parts ignorance and arrogance. As soon as I finished, a woman’s hand shot up. She appeared polite, pleasant. She was, in fact, a razor-edged, veteran, high school English teacher who had been waiting to unload.
    She began quietly. “We are told, sir, that you manage a company that makes good ice cream.”
    I smugly replied, “Best ice cream in America, Ma’am.”
    “How nice,” she said. “Is it rich and smooth?”
    “Sixteen percent butterfat,” I crowed.
    “Premium ingredients?” she inquired.
    “Super-premium! Nothing but Triple A.” I was on a roll. I never saw the next line coming.
    “Mr. Vollmer,” she said, leaning forward with a wicked eyebrow raised to the sky, “when you are standing on your receiving dock and you see an inferior shipment of blueberries arrive, what do you do?”
    In the silence of that room, I could hear the trap snap. I knew I was dead meat, but I wasn’t going to lie.
    “I send them back.”
    “That’s right!” she barked, “and we can never send back our blueberries. We take them big, small, rich, poor, gifted, exceptional, abused, frightened, confident, homeless, rude, and brilliant. We take them with ADHD, junior rheumatoid arthritis, and English as their second language. We take them all! Every one! And that, Mr. Vollmer, is why it’s not a business. It’s school!”
    In an explosion, all 290 teachers, principals, bus drivers, aides, custodians and secretaries jumped to their feet and yelled, “Yeah! Blueberries! Blueberries!”
    And so began my long transformation.
    Since then, I have visited hundreds of schools. I have learned that a school is not a business. Schools are unable to control the quality of their raw material, they are dependent upon the vagaries of politics for a reliable revenue stream, and they are constantly mauled by a howling horde of disparate, competing customer groups that would send the best CEO screaming into the night.
    None of this negates the need for change. We must change what, when, and how we teach to give all children maximum opportunity to thrive in a post-industrial society. But educators cannot do this alone; these changes can occur only with the understanding, trust, permission and active support of the surrounding community. I know this because the most important thing I have learned is that schools reflect the attitudes, beliefs and health of the communities they serve, and, therefore, to improve public education means more than changing our schools, it means changing America.
    by Jamie Robert Vollmer
     
     

    0
    #95253

    Tim Folkerts
    Member

    Jackey,
Unfortunately, there is no formula for how ANSI/ASQ Z1.4-2003 (which replaced MIL-STD-105E) calculates accept/reject numbers.  In many cases, the numbers seem to be rounded off more for convenience than for any sort of mathematical rigor. 
I was trying to figure out the OC curves based on sample size, AQL, & sampling level.  I could calculate the odds of accepting a lot with a specific quality if you entered the AQL and looked up the sample size.  I was expecting to find something like “for Level II, normal inspection, there is a 95% chance of accepting a lot whose true defect rate matches the listed AQL.”  I’m pretty clever at math, but I couldn’t find any consistent rules.
It sounds more like you are after something like “Hey, Excel! I have a shipment of 6000 pieces that I want at AQL of 1 for Level II inspection.” And Excel will say “test 200 pieces and get no more than 5 defectives.”  Again, you won’t find a simple rule for either the sample size or the reject number.  The best advice I have is to find another program that already does this (there are several available for a variety of prices via the internet).  The only other choice seems to be to use a lot of IF statements :-(   
    Tim
     
     
     
     

    0
    #95248

    Tim Folkerts
    Member

    Gabriel, 
    I was trying to go a level or two deeper than your coin example.  Let me see if I can explain it a little better. 
    Suppose you have 20 sets of blocks, and each set has lots and lots of blocks.  All 20 sets have an average length of 10.000″.  The first set has a st dev of 0.100″ and the lengths follow a normal distribution.  The next set has a st dev of 0.200″, and so on up to the 20th set with a st dev of 2.000″.
Now you go to the first set of blocks and draw out a random group of 30 blocks.  The true st dev is indeed always 0.1″, but you won’t always get 0.1″.  Now repeat this 10,000 times (equivalent to the Monte Carlo simulation I did).  Of all the different st devs you get, how many will happen to be as large as 1″ (or more specifically, how many are in the range 0.950″ – 1.050″)?  It turns out none were that large. 
    Now go to the next set of blocks, with a st dev of 0.2 and repeat the previous paragraph.  Then repeat for all the other sets of blocks. 
For every one of these 20 x 10,000 = 200,000 experiments, I will get some st dev.  One might be 0.234″, the next might be 1.435″, the next 1.010″.  I went through and picked out just the experiments where I estimated the st dev to be in the range of 0.95 to 1.05.  This happened to include 10021 of the 200,000 total experiments. 
    Now I can ask the question “If I did indeed observe the st dev to be ~1.0″, which set of blocks did the data likely come from?”  That is where the table comes in.
True St Dev     Odds
0.6                0 / 10021
0.7               31 / 10021
0.8              615 / 10021
0.9             2155 / 10021
1.0             2636 / 10021
1.1             2386 / 10021
1.2             1296 / 10021
1.3              558 / 10021
1.4              223 / 10021
1.5               77 / 10021
1.6               29 / 10021
1.7               11 / 10021
1.8                4 / 10021
1.9                0 / 10021
The odds it came from the blocks with a true st dev of 1.0 is 2636/10021 = 26.3%.  The odds it came from a set of blocks with a st dev of 1.5 or greater is (77+29+11+4)/10021 = 1.2%. 
    I completely agree with you when you say “In other word, who cares about the chances to get a bad standard deviation in the sample given that the actual standard deviation is good. The real risk is that you get a good standard deviation in the sample, and assume that the actual one is also good, given that it is not.”
    I was trying to simulate what you call the “real” risk.  I did 10,021 experiments where I found a “good” st dev of 1.0.   Some of these came from experiments where the “real” st dev was better than I thought, BUT some of these actually came from experiments with a “bad” st dev.  Specifically, 1.2% of these came from a distribution where the st dev was at least 1.5 times worse than I thought.
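If it helps to see the arithmetic spelled out, here is the same calculation as a small Python sketch.  The counts are simply copied from the table above; nothing new is being computed.

# Counts of experiments (out of the 10,021 with observed st dev ~1.0),
# broken out by the true st dev of the set they came from.
counts = {0.6: 0, 0.7: 31, 0.8: 615, 0.9: 2155, 1.0: 2636, 1.1: 2386,
          1.2: 1296, 1.3: 558, 1.4: 223, 1.5: 77, 1.6: 29, 1.7: 11,
          1.8: 4, 1.9: 0}
total = sum(counts.values())                                    # 10021

# Odds the observed ~1.0 came from a set whose true st dev really was 1.0
print(counts[1.0] / total)                                      # about 0.263

# Odds it came from a set with a true st dev of 1.5 or worse
print(sum(v for k, v in counts.items() if k >= 1.5) / total)    # about 0.012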
    I hope that clarifies what I was saying.  :-)
    Tim
     

    0
    #95205

    Tim Folkerts
    Member

    I don’t know that I want to get in the middle of what is apparently a continuing debate between Reigle & Statman.  I have a degree in Math, but I am self-taught in stats and six sigma.  Mostly, it is just an interesting topic and one that I want to understand.
     
    I decided to try a different Monte Carlo Simulation.  I ran 30 columns by 10,000 rows of random normal data (Minitab), with mean = 0, st dev = 1.0.  For each of the 10,000 rows I calculated the st dev.  I counted all of the rows where the st dev was close to 1 (0.95 <= st dev < 1.05).  In this case, 2636 fit the criterion.
Then I repeated the process using standard deviations of 0.6 to 1.8 in steps of 0.1.  The results are as follows:

True St Dev     Rows with calculated st dev in [0.95, 1.05)
0.6                0
0.7               31
0.8              615
0.9             2155
1.0             2636
1.1             2386
1.2             1296
1.3              558
1.4              223
1.5               77
1.6               29
1.7               11
1.8                4
(By coincidence, there are very nearly 10,000 results; 10,021 to be precise.) 
This could be interpreted to say "of all the ways to calculate a st dev of 1.0 from a sample of 30 pieces, here are the odds that it came from a distribution with a specific 'true' standard deviation."  In this particular case, there turns out to be approximately a 0.5% chance that the calculated st dev of 1.0 came from a distribution with a 'true' st dev greater than 1.5.
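For anyone who wants to reproduce this outside Minitab, here is a rough sketch of the same Monte Carlo procedure in Python with numpy.  It is my own quick translation, so treat the details (random seed, counting window) as assumptions rather than exactly what I ran.

import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 10_000

for true_sd in np.arange(0.6, 1.81, 0.1):
    # 10,000 samples of 30 normal values with mean 0 and this true st dev
    data = rng.normal(0.0, true_sd, size=(reps, n))
    sample_sd = data.std(axis=1, ddof=1)          # sample st dev of each row
    hits = np.sum((sample_sd >= 0.95) & (sample_sd < 1.05))
    print(f"true st dev {true_sd:.1f}: {hits} of {reps} rows gave st dev ~1.0")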
Certainly, this approach is not perfect.  I could run more than 10,000 rows.  I could look at results closer to 1.0.  I could make finer steps in the st dev.  I could try something other than n = 30.  Perhaps the most fundamental question is whether it is legitimate to assume a uniform step size in st devs as the universe of all possible st devs.  Perhaps a geometric series would be more appropriate.  (I have no idea if such a study has been published, but it seems someone must have done this before.)
    So, in this particular case, if you observe a st dev of 1.0, you can be quite sure that the true st dev was 0.7 – 1.5.
    Cheers,
    Tim
     

    0
    #95188

    Tim Folkerts
    Member

Mostly I was interested in clarifying the question.  You stated that the engineering specs were 1.240 and 4.976; you implied (but didn't specifically state) that the measured production values were the same.  I just wanted to point out that I was using those numbers as the measured values and therefore as the basis of my calculations.  (We all know that what the engineer asks for and what gets produced are often two different things!) 
     
    Beyond the simple mathematical need to know the measured values, there are of course other challenges as you point out – the physical challenge of keeping the process performing as desired and the statistical challenge to get a good estimate of this performance.  I don’t pretend to know how well the postulated 1.5 sigma shift emulates either of these difficulties in real life. 
     
    As you state “Perhaps one of the design goals should be to devise a set of tolerances that are “robust” to shift and drift.”  It seems that you can always make the process more robust in several ways:
1) You can create a design that is tolerant of variation.
2) You can use better methods of manufacture to reduce variation.
3) You can measure a lot to catch changes and use feedback to correct the problems.
    All of these cost money, so you have to balance the costs vs the benefits to decide how to most effectively “robustify” a process.
     
    Tim
     

    0
    #95174

    Tim Folkerts
    Member

I always like a good challenge, so I'll give it a shot… 
    We were told

    The envelope specification was given as 4.976 +/- .003 inches. Inside of this envelope are 4 parts, where each part was specified as 1.240 +/- .003 inches. The same NC machine makes the envelope and the 4 parts. The parts are to be randomly selected for assembly. Based on a process sampling of n=30 production parts, the process standard deviation was determined to be S=.001.
    (I’m assuming the process was pretty well centered at 1.240 and 4.976, rather than somewhere near the limits of the engineering specs. If that is wrong, then the rest is tainted.)
    QUESTION 1: What is the probability that the assembly gap will be greater than zero?
When combining standard deviations, take the root-sum-of-squares. In this case the combined st dev is sqrt(0.001^2 + 0.001^2 + 0.001^2 + 0.001^2) = 0.002. Thus the combined 4 parts are 4 * 1.240 = 4.960 with a st dev of 0.002. The envelope of 4.976 is (4.976 – 4.960)/0.002 = 8 st dev away from the mean, so basically no parts will fail – Excel won't even give an answer other than 0. (This assumes the st dev was actually found to be 0.00100. Since only 1 digit is quoted, it would also be reasonable to assume that the st dev was somewhere between 0.00050 and 0.00150.)
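If it helps, here is the same arithmetic as a few lines of Python (scipy's normal tail function is used only to show how small the probability is; the numbers come straight from the problem statement):

from math import sqrt
from scipy.stats import norm

part_mean, part_sd, n_parts = 1.240, 0.001, 4
envelope = 4.976

stack_mean = n_parts * part_mean                  # 4.960
stack_sd = sqrt(n_parts) * part_sd                # root-sum-of-squares = 0.002
z_gap = (envelope - stack_mean) / stack_sd        # 8 st dev of margin
print(z_gap, norm.sf(z_gap))                      # interference probability ~6e-16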
    QUESTION 2: Could it be that the given process standard deviation is biased due to random sampling error and, if so, to what extent?
Yes – otherwise you wouldn't ask :-). A guide I found says that for 30 parts, the calculated standard deviation has itself a standard deviation of ~13%. That is, if you kept drawing random samples of 30 and calculating the st dev, the spread in those calculated values would itself have a st dev of about 13% of the true value. If you take a 2 st dev (26%) range for a 95% certainty, then the measured st dev of 0.00100 could correspond to a true st dev anywhere in the range of 0.00076 to 0.00126.
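The ~13% figure is a rule of thumb; a chi-square interval for the standard deviation gives essentially the same answer. A minimal sketch using the standard chi-square result (nothing here is specific to this problem):

from math import sqrt
from scipy.stats import chi2

n, s = 30, 0.00100                                # sample size and observed st dev
lo = s * sqrt((n - 1) / chi2.ppf(0.975, n - 1))   # lower 95% limit on the true st dev
hi = s * sqrt((n - 1) / chi2.ppf(0.025, n - 1))   # upper 95% limit on the true st dev
print(lo, hi)                                     # roughly 0.0008 to 0.0013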
    QUESTION 3: Given such error in the process standard deviation, what is the statistical worst-case expectation in terms of “probability of interference fit?”
Now the combined four parts have a st dev of up to sqrt(0.00126^2 + 0.00126^2 + 0.00126^2 + 0.00126^2) = 0.0025. This is still 6.4 st dev from the envelope, for a failure rate of about 8E-11.
    QUESTION 4: If the design goal is no more than 3.4 interferences-per-million-assemblies, what component level standard deviation would be required to ensure the goal is met given the presence of random sampling error?
As all six sigma practitioners know, 3.4 PPM is 4.5 st dev from the mean. Thus the st dev of the combined parts should be no bigger than (4.976 – 4.960) / 4.5 = 0.0036. The st dev of each part should be no bigger than 0.0036 / (4^0.5) = 0.0018. To get our 26% cushion, the st dev should be measured as no more than 0.0018 / 1.26 = 0.0014.
     
    QUESTION 5: From a design engineering point-of-view, and in terms of the Z.gap calculation, should the potential random sampling error be factored into the numerator term as a vectored worst-case linear off-set in the component means or in the denominator term of the pooled error. Either way, what is the rationale?
    I don’t know!
     
    PS: This problem also has a Monte Carlo solution that will confirm the computational solution.
Summing 4 sets of 1,000,000 random data points in Minitab (mean 1.24, st dev 0.0018) gave no totals over 4.976, but several close to it, so I think I'm on the right track.
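In case anyone wants to rerun that check, here is a rough numpy equivalent of what I did in Minitab (my own translation; the exact count will vary from run to run):

import numpy as np

rng = np.random.default_rng(1)
n_assemblies = 1_000_000

# Four parts per assembly, each ~N(1.240, 0.0018); the envelope is 4.976.
parts = rng.normal(1.240, 0.0018, size=(n_assemblies, 4))
stack = parts.sum(axis=1)
interferences = np.sum(stack > 4.976)

# With roughly 4.5 st dev of margin, only a handful per million are expected.
print(interferences, "interferences out of", n_assemblies)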
     
     

    0
    #95133

    Tim Folkerts
    Member

     
    On the contrary, I think it is a very good question.  It is easy to get caught up in six sigma – in the jargon, in the process, in the statistics.  The challenge is to let the tools help you, not let the tools control you.
     
You have to decide what is important enough to measure and improve, and you have to decide what counts as an "opportunity".  If getting the total on the bill correct is important, then by all means count the DPMO for the total.  After all, if all ten customers send back a bill because each one has a defect, that is a lot worse than if one customer sends back a bill because it has ten errors.  On the other hand, if you are trying to see how often the clerical staff mistypes a figure, then you would be better off counting each number entered as an opportunity.  That then gives a good way to estimate how many errors you might expect on a given bill.  (Of course, that assumes your errors are random, which is often not the case.)
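To make the two ways of counting concrete, here is a toy sketch; all of the numbers are made up purely for illustration.

# Toy example: 10 bills, 50 numbers typed per bill, 12 typos in total,
# and 4 of the 10 bills contain at least one error.  (Made-up numbers.)
bills, fields_per_bill = 10, 50
typos, defective_bills = 12, 4

# Opportunity = "the bill is right": one opportunity per bill
dpmo_per_bill = defective_bills / bills * 1_000_000              # 400,000 DPMO

# Opportunity = "each number typed is right": 50 opportunities per bill
dpmo_per_field = typos / (bills * fields_per_bill) * 1_000_000   # 24,000 DPMO

print(dpmo_per_bill, dpmo_per_field)

Same process, very different DPMO, which is why the definition of "opportunity" has to come first.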
     
    The same principle applies to severity of mistakes.  Here you might consider FMEA (failure modes and effects analysis) to decide how important a problem is.  Or estimate the “cost” of an error.  A typo in a name on a bill doesn’t cost much.  A wrong number costs the time of your acct. dept. to find the old bill, verify the error, fix it, mail a new copy, receive the payment late, send a salesman to smooth things out, … .  Multiply the (defect rate) x (cost per defect) to determine the impact on the bottom line.
     
    One size does not fit all.  Not all defects were created equal.  Man does not live by six sigma alone. 
     
    Tim

    0
    #94923

    Tim Folkerts
    Member

Probability plots are a quick, visual way to test the distribution.  In Minitab, go to the menu "Graph" then "Probability Plot".  Select the data you want to test and then choose the type of distribution you want to test.  Minitab will then create a plot.  If the distribution you chose was right (or at least close), the plot will come out as a straight line.  If it's not straight, try plotting again with another distribution.  Hopefully you will eventually find one that looks good. 
    (You can also use the menu “Calc”, “Probability Distributions” to generate some sample data to test.  Create a few different sets of sample data using different distributions to get a feel for what a good plot looks like.)
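If you don't have Minitab handy, the same kind of plot can be made with scipy and matplotlib.  A minimal sketch, testing against a normal distribution (other distributions can be passed through the dist argument):

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Some sample data to test (replace with your own measurements)
data = np.random.default_rng(2).normal(loc=10.0, scale=2.0, size=100)

# Probability plot against a normal distribution: points falling near the
# straight line suggest the chosen distribution fits reasonably well.
stats.probplot(data, dist="norm", plot=plt)
plt.show()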
     
    Tim
     

    0
    #94922

    Tim Folkerts
    Member

    Tchebysheff’s Theorem says “The fraction of a population occurring within k standard deviations of the mean is at least  1 – 1/k^2.”
    People often state rules like “68% of data points fall within 1 standard deviation of the mean” and “95% of data points fall within 2 standard deviations of the mean”.  These are useful, but only really apply to “normal” distributions (i.e. bell-shaped histograms).
Tchebysheff's Theorem applies to the worst-case scenario.  It says that you can be guaranteed that at least 1 – 1/k^2 of the data will be within k standard deviations.
For k = 1, you can only be sure that at least 1 – 1/1^2 = 0% are within +/- 1 sigma.  (The 68% you expect for a normal distribution is certainly > 0%.)
Consider a set of data with "1" showing up 9 times, "-1" showing up 9 times and "0" showing up twice.  The mean is 0, sigma is 0.97, so all of the "1" and "-1" values are more than 1 sigma away from the mean.  Only 2/20 = 10% are within +/- 1 sigma.  10% is bigger than 0%, but it is nowhere near the 68% that you would predict for a "normal" distribution.
For k = 2, you can be sure that at least 1 – 1/2^2 = 75% are within +/- 2 sigma.  (The 95% you expect for a normal distribution is certainly > 75%.)
Now consider a set of data with "1" showing up 2 times, "-1" showing up 2 times and "0" showing up 16 times.  The mean is 0, sigma is 0.46.  Here only 16/20 = 80% are within +/- 2 sigma.  80% is bigger than 75%, but it is nowhere near the 95% that you would predict for a "normal" distribution.
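If you want to check those two little data sets yourself, here is a short sketch in Python (it uses the sample standard deviation, which matches the sigmas quoted above):

import numpy as np

for data in ([1]*9 + [-1]*9 + [0]*2,      # first example
             [1]*2 + [-1]*2 + [0]*16):    # second example
    x = np.array(data, dtype=float)
    m, s = x.mean(), x.std(ddof=1)        # mean and sample standard deviation
    for k in (1, 2):
        within = np.mean(np.abs(x - m) <= k * s)
        bound = 1 - 1 / k**2
        print(f"sigma={s:.2f}  k={k}: {within:.0%} within, Tchebysheff bound {bound:.0%}")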
    Does that help?
    Tim
     
     
     

    0
    #94668

    Tim Folkerts
    Member

    I’m not sure why you would want a special control chart.  “Ovality” is just a continuous variable, so X-bar R or X-bar s should work fine.
The deeper question is how best to estimate ovality or diameter.  Measuring two places at random seems like an ineffective choice.  It is quite possible to be way out of round, yet happen to choose two points that give the same diameter.  Why not find the max and min diameters and get a much truer estimate of ovality and of the average diameter? 
     
    Tim
    P.S.  “Ovality” = (spread in diameter) / (average diameter) for anyone who is interested.   
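As a tiny illustration of that P.S., assuming you measure the diameter at several angles around one part (the numbers below are made up):

# Made-up diameter measurements (inches) taken at several angles around one part
diameters = [2.003, 2.001, 1.998, 1.996, 1.999, 2.002]

avg_diameter = sum(diameters) / len(diameters)
ovality = (max(diameters) - min(diameters)) / avg_diameter   # spread / average

print(f"average diameter = {avg_diameter:.4f} in, ovality = {ovality:.4f}")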

    0
Viewing 56 posts - 1 through 56 (of 56 total)