iSixSigma

Tolerance Stack-up Analysis

  • #30973

    woey
    Member

    Can anyone help me with the following question: two components are assembled together linearly, and the assembly length is 40 +/- 1 mm. What are the tolerances for each component so that Cp = 2.0 and Cpk > 1.5? You can make any assumptions you want; I just want to know your approach to a problem like this. Thanks.

    #81358

    Dr. Steve W.
    Participant

    Stack-up and Cp (or Cpk) are different animals. Having the right tolerance stack-up does not ensure a good Cp and/or Cpk. The latter are determined by your tolerance spec, your assembly process mean, and your process variation. Tolerance allocation is typically finalized at the product design stage, while process capability won’t be known until you collect some products from your manufacturing line. As a PD person, you can set a target for the process mean and an upper bound for the process variation so that you will have a good Cpk.

    #81374

    Gabriel
    Participant

    Assumptions: both components have the same symmetrical tolerance around their nominal value, and both nominal values sum to 40. The manufacturing processes of both components have the same variation and meet the capability requirement of Cpk not lower than 1.33. The assembly process adds no extra variation to the total length (i.e. L = L1 + L2).
    Because Cp=2 and the tolerance range is 2mm, S=1/6mm for the assembly. Also, because Cpk>1.5, the average must be at least 4.5 S = 0.75mm away from the closest limit, so the maximum shift of the average from the target value (40) is 0.25mm.
    To assure that the assembly average will not be more than 0.25mm away from the target, the average of each component must not be more than 0.125mm away from its target.
    S^2 = S1^2 + S2^2 = 2 x S1^2 ==> S1 = S/sqrt(2) = 0.11785mm (this is the standard deviation of the manufacturing process of the components). The average of the manufacturing process of the components shall be at least 4 x S1 = 0.47140mm away from the specification limit to assure a Cpk>1.33.
    If we take “distance from the specification limit to the average” = 0.47140mm and “distance from the average to the target” = 0.125mm, then “distance from the specification limit to the target” = 0.125mm + 0.47140mm = 0.59640mm (let’s say 0.6mm?).
    So a tolerance of +/-0.6 for the components will assure Cp=2 and Cpk>1.5 for the assembly, if the components are manufactured with a Cpk>1.33.
    Note, however, that you said Cp=2, not Cp>2; that means S1=0.11785mm and not smaller. If S1 were improved (reduced), you could offset the average of the components more and more while maintaining a Cpk>1.33. In the limit, with very low variation in the components (S1), you could keep a Cpk>1.33 with the whole distribution very close to the specification limit (say +0.6mm), and the sum of the components would be around 40 + 1.2mm (i.e. out of tolerance).
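    A minimal Python sketch of the arithmetic above (the variable names are illustrative, not from the thread):

        import math

        USL, LSL, target = 41.0, 39.0, 40.0     # assembly spec: 40 +/- 1 mm
        Cp, Cpk_min = 2.0, 1.5                  # assembly requirements

        S = (USL - LSL) / (6 * Cp)              # assembly sigma = 1/6 mm
        max_shift = (USL - target) - Cpk_min * 3 * S   # 1 - 0.75 = 0.25 mm
        shift_per_comp = max_shift / 2          # 0.125 mm, split equally
        S1 = S / math.sqrt(2)                   # component sigma via RSS
        tol = shift_per_comp + 4 * S1           # 0.125 + 0.4714 = 0.5964 mm

        print(S, max_shift, S1, tol)            # -> 0.1667 0.25 0.11785 0.5964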

    #81393

    woey
    Member

    Gabriel: Nice presentation. I like your approach to the problem. However, I have a dumb question: why is the max shift of the average from the target value (40) 0.25mm?
    Thanks!

    #81416

    Gabriel
    Participant

    This is from the previous post:
    “Because Cp=2 and the tolerance range is 2mm, S=1/6mm for the assembly. Also, because Cpk>1.5, the average must be at least 4.5 S = 0.75mm away from the closest limit, so the maximum shift of the average from the target value (40) is 0.25mm.”
    I guess that you had already seen that but are still not convinced, so let’s demonstrate it:
    Cp=(USL-LSL)/(6S) ==> S=(USL-LSL)/(6Cp)=2mm/(6*2)=1/6mm.
    Cpk=min(USL-Xbar; Xbar-LSL)/(3S) or, what is the same (only valid if the average is within the specification limits, which is the case for a Cpk>1.5):
    Cpk=|Xbar-closestSL|/(3S) ==> |Xbar-closestSL|=Cpk*3S, and because Cpk>1.5, then |Xbar-closestSL|>1.5*(3*1/6mm)=0.75mm (where closestSL is the specification limit closest to Xbar).
    As long as Xbar is somewhere between the target and the closestSL (which is always the case if Xbar is within the specification limits and the target is in the middle of the tolerance range, both true here), we can write:
    |target-closestSL|=|target-Xbar|+|Xbar-closestSL| ==> |target-Xbar|=|target-closestSL|-|Xbar-closestSL|. Note that, regardless of which specification limit is the closest one, |target-closestSL|=1mm, and because |Xbar-closestSL|>0.75mm:
    |target-Xbar|<1mm-0.75mm=0.25mm
    This reads as “the distance from the average to the target is less than 0.25mm”, or “the max shift of the average from the target value is 0.25mm”.
    I thought it was too long to put in the original post.
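    A small numerical check of this algebra (assembly spec and sigma as above; the helper function is illustrative):

        def cpk(xbar, usl=41.0, lsl=39.0, s=1/6):
            return min(usl - xbar, xbar - lsl) / (3 * s)

        for xbar in (40.0, 40.1, 40.25, 40.3):
            print(xbar, round(cpk(xbar), 2))
        # 40.0  2.0   (centered: Cpk equals Cp)
        # 40.1  1.8
        # 40.25 1.5   (a shift of exactly 0.25mm gives Cpk = 1.5)
        # 40.3  1.4   (beyond 0.25mm the Cpk requirement is violated)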

    #94091

    Andy Sleeper
    Participant

    I agree with Gabriel’s assumptions and analysis to a point, although there is another important assumption: that the two parts are independent of each other.  Without this assumption, the Root-Sum-Square (RSS) formula does not apply.
    Later in the analysis, I disagree with Gabriel’s approach.  In Gabriel’s analysis, to compute a tolerance limit for each component, he adds 4S = 0.471 to 0.125.  The 0.125 is to account for the 1.5-sigma shift on the assembly, not for each component.
    Using Gabriel’s result of ±0.6 for each component, if manufacturing makes these with Cpk = 1.33, then each component may have a standard deviation as large as 0.15 (if 0.6 is 4S, then S = 0.15).  If each component has S = 0.15, then the assembly will have S = sqrt(2)*0.15 = 0.21.  This results in a system Cp of 1.58, less than the desired 2.0.
    I recommend a different approach: design the product to have Cp > 2.0 and Cpk > 2.0 using the best available information.  In general, when shift happens, the shift makes things worse than you expected during the design process.  We won’t add in the shift in the design process – manufacturing will do that on their own.
    As in Gabriel’s post, set the nominal values for each component so they sum to 40.  Then compute the system standard deviation, which has to be smaller than 1/6mm.  Each component standard deviation = (1/6)/sqrt(2) = 0.11785.
    If the components will be manufactured with Cpk > 1.33, then set the tolerances at ±4*0.11785 = ±0.47.
    But as a pessimist, I would have a hard time accepting that Cpk > 1.33 on these parts without some data to back things up and a good control plan.  Without data or a control plan, I would rather design so that quality is still good even if all components are uniformly distributed within their tolerance limits.  With a uniform distribution, the limits of the distribution are ±sqrt(3) * the standard deviation.  So in my more pessimistic (realistic) world, I would set the tolerance for the components at ±sqrt(3) * 0.11785 = ±0.20.
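    Both calculations can be checked with a few lines of Python (numbers as above):

        import math

        # Backwards from Gabriel's +/-0.6 tolerance at component Cpk = 1.33:
        s_comp = 0.6 / 4                 # S = 0.15
        s_assy = math.sqrt(2) * s_comp   # 0.212
        print(2.0 / (6 * s_assy))        # system Cp ~ 1.57 (1.58 if S is
                                         # rounded to 0.21), short of 2.0

        # Forwards from the required system Cp = 2:
        s1 = (1 / 6) / math.sqrt(2)      # component sigma = 0.11785
        print(4 * s1)                    # +/-0.47 if Cpk >= 1.33 is trusted
        print(math.sqrt(3) * s1)         # +/-0.20 if parts are only assumed
                                         # uniform within their limits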
    That’s my approach – if you have thoughts, please let me know.

    #94129

    Gabriel
    Participant

    Andy, you are right about some things, misunderstood others, and are wrong about others. But thanks to your post I’ve seen my own mistakes.
    – I forgot the independence assumption. I also forgot the normality assumption.
    – The 1.5 sigma shift in the assembly must be due to shifts in the components, as one of the assumptions was that the sum of the nominals of the components was the nominal of the assembly (i.e. centered components = centered assembly). I had proven before that if, in the assembly, Cp=2 and Cpk>1.5, then the shift of the average from the nominal (center of the) specification had to be 0.25 at most. Then I took the average of each component to be shifted 0.125, since 0.125+0.125=0.25.
    Two mistakes: a) other combinations of shifts of the components (such as 0.1 and 0.15, or 0.3 and -0.05) also lead to a shift of 0.25 in the assembly; b) the requirement was that the shift was “at most” 0.25, but I converted it to “equal to” 0.25. By doing that, I converted the input “Cpk>1.5” into “Cpk=1.5”.
    If you add to the list of assumptions that any shift of the mean of the assembly is equally split among the components, and that Cpk=1.5 (not >1.5), then it works.
    With that assumption, it is not possible that the components have a standard deviation of 0.15 because, with a Cpk of at least 1.33, that would happen only if the process were centered and Cp were 1.33; but that would lead to a centered assembly too, which violates the Cp=2, Cpk=1.5 assumption that needs an off-centered assembly.
    If, as you said, there is no data to back up (and model) the process distribution, the “uniform distribution” assumption is far from “pessimistic” (you will see why) and “realistic” (have you ever seen a process that is uniformly distributed covering the whole tolerance? So why assume that without data to back it up?). The most unfavorable (and still very realistic) case is that the process is very capable but very off-centered. As I said in the previous post, if the variation of the components is very small but the process is very off-center, then while all components will be in tolerance, they will also all be close to one specification limit. In this case, forget statistics: the tolerances for the components must sum to the tolerance for the assembly (if for the assembly it is +/-1, then for the components it must be +/-0.5, assuming it is the same for both components).
    Because things such as Cp and Cpk were involved in the problem, I pointed the solution toward statistical tolerancing. Statistical tolerancing needs either some knowledge of the process you are tolerancing or, instead of putting specifications on the individuals, specifying the distribution (a tolerance for the average and a tolerance for the standard deviation).
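    A tiny Monte Carlo of that “very capable but very off-centered” case (the mean offset and sigma below are illustrative assumptions, not values from the thread):

        import random

        # Component spec 20 +/- 0.6; process shifted close to the upper limit:
        mu, sigma = 20.55, 0.01
        # Component Cpk = (20.6 - 20.55) / (3 * 0.01) = 1.67, comfortably > 1.33

        out = sum(
            1
            for _ in range(100_000)
            if not 39.0 <= random.gauss(mu, sigma) + random.gauss(mu, sigma) <= 41.0
        )
        print(out)  # ~100000: every assembly exceeds 41, despite capable components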

    #94142

    Niraj Goyal
    Participant

    This is a very interesting problem. Hope what I suggest helps you.

    Essentially, additive tolerances can be analysed in the following way to arrive at the process capability of the assembly:

    Assume the first piece has a mean dimension = x
    and the second piece has a mean dimension = y.

    Assume the variance of x is v1
    and the variance of y is v2.

    Then, assuming normal distributions and that x and y are independent (i.e. the production of one piece has nothing to do with the production of the second piece):

    The mean of the assembled piece is X = x + y

    and the variance of the assembly is V = v1 + v2.

    In case the individual pieces are of different lengths, you can use the weighted average and weighted variance formulae.

    This relationship enables you to calculate the mean and variance (and therefore sigma) of the assembly from the distributions of the individual piece dimensions.

    The calculation of Cp etc. for the assembled piece then follows the standard methods.

    You can obviously work backwards to set up the process limits for the individual pieces – one could be much tighter than the others, depending upon your process – to control the finished piece dimensions.
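    A minimal numerical illustration of these additive relationships (the component means and sigmas below are made-up values):

        import math

        mu_x, v1 = 25.0, 0.10**2    # piece 1: mean and variance
        mu_y, v2 = 15.0, 0.12**2    # piece 2

        mu_assy = mu_x + mu_y       # means add: 40.0
        V = v1 + v2                 # variances add (given independence)
        sigma = math.sqrt(V)        # 0.156

        # Cp of the assembly against 40 +/- 1 then follows the standard formula:
        print(mu_assy, sigma, 2.0 / (6 * sigma))   # 40.0 0.156 ~2.14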
     

    #94152

    Gabriel
    Participant

    “Then, assuming normal distributions and that x and y are independent (i.e. the production of one piece has nothing to do with the production of the second piece):

    The mean of the assembled piece is X = x + y

    and the variance of the assembly is V = v1 + v2.”
     
    Normality is not a requirement for this to be true. Let X and Y be two independent variables; then µ(X+Y)=µ(X)+µ(Y) and V(X+Y)=V(X)+V(Y), whether X and Y are normal, uniform, discrete, or continuous, and whether or not they have the same distribution. The only requirements are independence and that the averages and variances are defined (there are some distributions for which they do not exist).
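    A quick numerical check, using two deliberately non-normal distributions (the choices are arbitrary):

        import random

        n = 200_000
        x = [random.uniform(0, 1) for _ in range(n)]     # uniform: µ=0.5, V=1/12
        y = [random.expovariate(2.0) for _ in range(n)]  # exponential: µ=0.5, V=1/4

        def mean(v): return sum(v) / len(v)
        def var(v):
            m = mean(v)
            return sum((a - m) ** 2 for a in v) / len(v)

        z = [a + b for a, b in zip(x, y)]
        print(mean(z), mean(x) + mean(y))   # both ~1.0
        print(var(z), var(x) + var(y))      # both ~0.333 (1/12 + 1/4)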

    #94161

    Andy Sleeper
    Participant

    Gabriel,
     
    It is a pleasure to discuss this topic with someone knowledgeable about the details of the methods.
     
    In two areas, this discussion highlights differences in opinion about how to apply tolerance analysis to real problems with incomplete information.  In these areas where facts are not in dispute, I respect your opinions and techniques.  I would like to discuss why I believe the way I do in these areas, and invite others for their opinions.  Also, I would like to answer a couple of other points raised by your message.
     
    Problem 1:  How should I account for unexpected sources of shifts and variation, in particular, the 1.5-sigma shift which is a Six Sigma convention?
     
    Option 1:  Assign the 1.5-sigma shift to the components in the system.  To apply this, allocate the shift to each component, and account for it in the tolerances for each component.
     
    Option 2:  Assume each component is centered and assign the 1.5-sigma shift to the system itself, which would include the net effects of component shifts, plus assembly or system issues which are not anticipated by the transfer function (in this example, the transfer function is Y = A + B)
     
    One problem I see with Option 1 is that it is unclear how to account for the shift in the component tolerance.  For instance, if the unshifted 4-sigma tolerance is 0.475, you could add the 0.125 shift for a total tolerance of 0.600.  But then if the part is manufactured with a “Cpk >= 1.33” goal, the standard deviation could be as large as 0.15 with zero shift, and this is too much variation.  In a second way of thinking, we could make the component tolerance smaller, to 0.475 – 0.125 = 0.35.  With this tolerance, the part could be manufactured so that 4-sigma = 0.35 (Cpk >= 1.33), and then an additional unexpected shift of 0.125 could be added without causing a system problem.   It is confusing and unsettling that I don’t know whether to add or subtract the shift allowance. 
     
    Another problem I see with Option 1 is that it does not easily adapt to non-linear problems.  With a nonlinear transfer function, the allocation of shift to each component may be impractically difficult.  If the allocation can be done correctly, the direction of shift for each component may affect the results.  Which way should I shift each component? (more on this later)
     
    I prefer option 2 because it accounts for variations in components, process, varying manufacturing interpretations, and other problems I can’t predict, in one step.  Also, option 2 may be applied consistently to linear and nonlinear tolerance situations.  And the best part of option 2 is, it’s easy.  Simply design the system for Cpk >= 2.0.
     
    Problem 2: If I know nothing about a component, how can I do tolerance analysis?
     
    Option 1:  Go get data on all components before analyzing the effect of tolerances.
     
    Option 2: Assume that each component is normally distributed with Cp = 1 and Cpk = 0.5 (as in M. Harry’s mechanical tolerancing book)
     
    Option 3: Assume that each component is uniformly distributed between tolerance limits (Cp = Cpk = 0.577)
     
    In an ideal world, all data on all parts would be available to engineers designing new products.  But I’ve never worked in a world like that.  Typically, there is some data but not a lot.  Option 1 would be nice, but it’s not practical today.  If data is available, then by all means, use it.  But with no data, I prefer to do the analysis using a conservative assumption, so I can do the analysis early in the project.  Then, if I need data on certain components, I can go get data on those few components that really matter, and refine the analysis.
     
    So what conservative assumption is best?  Option 2 and 3 are just options – many other assumptions are possible.  Option 2 works fine for linear stackups, but for nonlinear systems, the direction of shift may change the results.  If you have 10 components in a nonlinear system, each component could be shifted up or down.  Should I evaluate all 1024 combinations?  I could do a sensitivity analysis and choose shift directions with some thinking, but this is still a lot of work.
     
    Option 3 appeals to me because it expresses, “I believe this component is in tolerance, but I don’t know anything else.” 
     
    Certainly you are right, a uniform distribution is not worst-case.  We’ve all seen distributions which are bunched up against one limit, or are bimodal.  But usually, Cpk’s of real parts are better than 0.577.  I disagree with assuming that Cpk >= 1.33 unless I know that there is a strong SPC activity in place.  Even then, I am only comfortable assuming Cp>= 1.00 without actual data to prove it’s better.
     
    To answer your question, no, I’ve never seen a perfectly uniform distribution.  But I have seen a lot of non-normal distributions.  With no data, I feel safer assuming uniform than normal.  That’s my opinion.
     
    Bottom-line on this point: We need some sort of default assumption to use with no data, and this assumption should be worse than real distributions, in most cases.  The shape of the distribution matters less than its standard deviation.
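    To make that concrete, here is a minimal Python sketch of the standard deviation each default assumption feeds into an RSS analysis, for a part toleranced at +/-t (t = 1 here; the formulas follow the options above):

        import math

        t = 1.0
        s_normal  = t / 3             # Option 2: normal, Cp = 1 -> sigma = t/3
        shift     = t / 2             # ...and Cpk = 0.5 implies a mean shift of t/2
        s_uniform = t / math.sqrt(3)  # Option 3: uniform -> sigma = 0.577*t

        print(s_normal, shift, s_uniform)   # 0.333 0.5 0.577
        # The uniform default carries the larger standard deviation, which is
        # what matters most for the stack-up.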
     
    Also, I must note that the Root-Sum-Square formula for computing the variance of a linear stackup does not require the normal distribution assumption.  It only requires independence and the existence of a standard deviation.  You made this point nicely in the message you posted today.
     
    Finally, Gabriel, I whole-heartedly endorse your final paragraph on statistical tolerancing.  All this complexity and difficulty we have been discussing arises because we are trying to impose statistical thinking on a world that thinks in limits.  Twenty years from now, when we communicate statistically, we will no longer need tolerance limits.
     
    Thanks for the discussion!
    Andy
     

    #94185

    Gabriel
    Participant

    Andy,
    Being a “Sleeper”, you seem to be far more awake than your alias suggests (I am assuming that “Sleeper” is not your real name, but it could be). However, I must tell you that you are wrong from the first sentence on (ok, only the first sentence):
    “It is a pleasure to discuss this topic with someone knowledgeable about the details of the methods”
    Who? Me? No, I am not knowledgeable about the details and methods of statistical tolerancing. I just tried a solution to the original poster’s question, who seemed to have received little feedback by that time. For that I used my limited statistical knowledge, my experience in industry in the field of quality, and a little bit of common sense (which seems to be not so common and might not make so much sense). Then I just answered your objections to my post by saying why I had done what I had done, but that does not mean it was OK in the first place.
    My position is typically that of the third party to whom both product engineering and production complain because I am splitting hairs about a “stupid” irrelevant nonconformity, when in fact the “stupid” process from production just did something different from the “stupid” specification from product engineering.
    I’ve also dealt with production people saying, about a set-up where they are trying to get the thing in tolerance, “it is at the limit, but in tolerance”, with me trying to explain that if the average is at the limit then about 50% is out of tolerance.
    With those limitations in sight, I will try to go on with the discussion.
    Problem 1:
    – I just do not believe in the “magic” 1.5 sigma shift. My wrong and unstated assumption that Cpk=1.5 (when the input was Cpk>1.5) happened to be a 1.5 sigma shift only because the other input was Cp=2.
    – I have no clear answers for the rest of problem 1, but just a final thought:
    If the process variation is S, then the Cp will NOT be what you decide it to be via product design (2 or whatever). It will just be T/(6S), where T is the tolerance width in the product design. Saying “I prefer to design the system for a Cpk>2” will not make the process have a Cpk>2. If you reduce the tolerance, Cpk will worsen, not improve.
    The real challenge in product design is to make a product with tolerances that the actual process can handle, without penalizing the product performance. Unless you want to design a new process for every new product.
    Problem 2:
    This part is more complicated to explain.
    To begin with, you say that a uniform distribution covering the whole tolerance has Cp=Cpk=0.577. That is true, but what is the value of that? If I had to choose between parts that are uniformly distributed with Cp=Cpk=0.577 or normally distributed with Cp=Cpk=1, I don’t think twice. I keep Cp=Cpk=0.577, which assures me 0 PPM (against 2700 PPM for Cp=Cpk=1). It is even better than the famous six sigma with 3.4 DPMO!
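    A quick check of those PPM figures (standard normal CDF via the error function):

        import math

        def phi(z):  # standard normal cumulative distribution function
            return 0.5 * (1 + math.erf(z / math.sqrt(2)))

        # Centered normal with Cp = Cpk = 1: the limits sit at +/-3 sigma.
        print(2 * phi(-3) * 1e6)   # ~2700 PPM
        # A uniform distribution covering exactly the tolerance puts nothing
        # outside the limits: 0 PPM, despite Cp = Cpk = 0.577.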
    To go on with this, let’s use inverse logic with an example:
    An assembly is made of n parts P stacked up. The length of the assembly is the sum of the lengths of the components and has a tolerance of L±a. You have one process that manufactures part P, and for every n parts P you build one assembly. The assembly process does not add extra variation to the length. Design the length of part P, M±b, by:
    a) worst-case stack-up tolerancing;
    b) statistical tolerancing to get Cp=2 (let’s forget about Cpk for now), assuming that part P is uniformly distributed covering the whole tolerance;
    c) statistical tolerancing to get Cp=2, assuming that the components’ process has Cp=1.
    a) M=L/n, b=a/n
    b) M=L/n
    Cp=2a/(6 sigma) ==> sigma=a/(3 Cp)=a/6
    sigma=sqrt(sum(sigmaci^2)), where sigmaci is the standard deviation of the distribution of the i-th component, with i from 1 to n. Because all n components come from the same distribution, sigmaci = sigmac (not depending on i), so:
    sigma=sqrt(sum(sigmac^2))=sqrt(n*sigmac^2)=sigmac*sqrt(n).
    Because the components are uniformly distributed in M±b, sigmac=b/sqrt(3), so:
    sigma=b*sqrt(n/3)=a/6 ==> b=sqrt(3/n)*a/6
    c) M=L/n
    As before,
    Cp=2a/(6 sigma) ==> sigma=a/(3 Cp)=a/6
    sigma=sqrt(sum(sigmaci^2))=sigmac*sqrt(n)=a/6.
    Cpc=2b/(6 sigmac) ==> sigmac=b/(3 Cpc)=b/3, where Cpc is the Cp of the component.
    (b/3)*sqrt(n)=a/6 ==> b=a/(2*sqrt(n))
    Summary of results (b/a for each case):

     n    a)      b)      c)
     1    1.000   0.289   0.500
     2    0.500   0.204   0.354
     3    0.333   0.167   0.289
     4    0.250   0.144   0.250
     5    0.200   0.129   0.224
     6    0.167   0.118   0.204
     7    0.143   0.109   0.189
     8    0.125   0.102   0.177
     9    0.111   0.096   0.167
    10    0.100   0.091   0.158
    11    0.091   0.087   0.151
    12    0.083   0.083   0.144
    13    0.077   0.080   0.139
    14    0.071   0.077   0.134
    15    0.067   0.075   0.129
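    A short Python script reproduces the table (formulas from the derivations of cases a, b and c above):

        import math

        print(" n   a) b/a   b) b/a   c) b/a")
        for n in range(1, 16):
            worst_case = 1 / n                   # a) b = a/n
            uniform    = math.sqrt(3 / n) / 6    # b) b = sqrt(3/n)*a/6
            cp_one     = 1 / (2 * math.sqrt(n))  # c) b = a/(2*sqrt(n))
            print(f"{n:2d}   {worst_case:.3f}    {uniform:.3f}    {cp_one:.3f}")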
    As seen, case b) (“your” statistical tolerancing) is more restrictive than worst-case tolerancing for n up to 12. What is the point of putting tolerances that are tighter than worst case? Is it to protect against a case that is worse than worst? The extreme case is n=1, where you require from the part a tolerance of less than 30% of what is needed, even though with a tolerance of 100% all parts would be in tolerance, due to the uniform distribution and regardless of such a low Cpk.
    The idea of statistical tolerancing is to avoid tolerances that are too tight merely to assure that a worst-case combination would still be in tolerance, when that worst-case combination is in fact extremely unlikely.
    In this example the shift is not considered. If you do consider it, the uniform distribution becomes too permissive, especially in those cases where no SPC activities or other means to control the average and variation of the process are in place, as you said. In such a scenario, why do you think that any reduction in variation (from what is specified) will put you further from both specification limits? Imagine a process that is uniformly distributed but whose range is half the specification. In my experience, if conformity checking is the only means of control, the most probable outcome is that production will adjust the set-up just until they are in tolerance and then start production, without taking care of the average. With that you will have a distribution with less variation but with more shift, which will be worse.
    To say it another way: do you really think that if you take one part and it is very close to one specification limit, the probability of the next part being anywhere within the specification is independent of this new knowledge (as it would be if it were a uniform distribution covering the whole tolerance)? No; most probably they are working close to that limit. Countless times I have seen samples in which all parts show little variation relative to the specification, but sit close to (or even beyond) a specification limit.
    The problem is, you cannot define statistical tolerances if statistics has nothing to do with the manufacturing process either.
    Finally, I strongly agree with your last sentence. There will come a time when the specification will be “average target X, variation target 0, keep as close to these two targets as possible”. Then we will be in the Cpm world. While we keep speaking of Cp and Cpk, we will keep speaking of worst-case stack-ups.

    #94188

    Mikel
    Member

    Amen on the Cpm.

    #94223

    Andy Sleeper
    Participant

    Gabriel,
     
    Why yes, my real name is Sleeper.  It has provided a lifetime of amusement and conversation.  Just ask my kids, Light, Deep and Perfect. :-)  Had to pay a royalty to use that last one.   But I digress.

    That’s a nice analysis of (a) worst-case vs. (b) component Cp=Cpk=0.577 vs. (c) component Cp=Cpk=1.  You are right that statistical tolerancing may require worse-than-worst-case design.  This is a concern for a lot of people.  Your analysis shows that option b requires worse-than-worst-case for n<12, while the crossover for option c is n<4, in this particular case where n identical, independent parts are stacked. 
     
    I claim that worse-than-worst-case design is often required, because:
    – Often our systems need to have better capability than our components do
    – Measurement systems are never perfect
    – Processes often don’t measure every part
    – That old catch-all favorite, Shift Happens.  We have to have room on both sides to handle the shift.
      
    The engineer always has the option to go get actual data and not use any default assumptions.
     
    The uniform default assumption (Cp=Cpk=0.577) is only a benchmark which makes sense to me, when there is no data to prove that the components are better.  I have no problem with people using Cp=Cpk=1 as their benchmark, as long as they understand and accept the risk of making that assumption.
     
    By the way, you are astute to observe that many processes make correlated parts, and then the RSS formula no longer applies.  Using RSS can be dangerously wrong when a bunch of identical parts are stacked up.  I have an example of this if anyone is interested.
     
    But perhaps that’s enough for now.  It’s kind of funny that the original post was asking about the simplest possible problem, Y = A+B, and that spawned all this talk. 

    #94709

    Williams
    Participant

    Andy,
    I would like to see the RSS example you referenced in your post. Please email it to me.
    Thanks,
    Robert

    #94710

    Williams
    Participant

    [email protected]_msn.com
    Please ignore NOSPAM_

    #94717

    Andy Sleeper
    Participant

    Robert,
    The Root-Sum-Square method is basically as follows:
    If Y = X1 + X2 + … + Xn and the Xi are all mutually independent, with mean mu-i and standard deviation sigma-i, for i = 1 to n,
    then the mean of Y is mu-Y = mu-1 + mu-2 + … + mu-n,
    and the standard deviation of Y is sigma-Y = sqrt(sigma-1^2 + sigma-2^2 + … + sigma-n^2).
    This last equation is where “Root-Sum-Square” comes from.
    For example, what if I wanted to know the length of a stack of 50 pennies for a machine I was designing?  Suppose the mean thickness of a penny is 1.00 mm with a standard deviation of 0.01 mm, which are total guesses on my part.
    If the thicknesses of the pennies are independent, then the mean thickness of the stack of 50 is 50.00 mm, with a standard deviation of sqrt(50)*0.01 mm = 0.0707 mm.  So if I designed a cavity width of at least 50 + 6*0.07 = 50.42 mm to hold 50 pennies, that should be a “Six Sigma” design! I can set the final tolerance of the cavity at 50.50 +/- 0.08 mm and be done.
    Now if these pennies were random pennies from my collection, it’s reasonable to assume that they are independent.  (Assume pocket gunk has zero thickness.)
    But what if the pennies were just made at the Denver Mint, and I have 50 consecutively stamped pennies.  Are they independent?  Maybe not.  Pennies that are made at one time are more likely to have the same thickness than ones made at different times. 
    If all 50 pennies have the same thickness, they are not independent; they are perfectly correlated.  In this case, the RSS formula no longer applies.  When all pennies are identical, the standard deviation of the stack of 50 pennies is 50 * 0.01 = 0.5 mm.  My “six sigma” cavity design of 50.5 +/- 0.08 mm is now less than a one-sigma design.
    Oops.
    This is a made-up example, but I have seen this problem in real life in the design of a clutch with a stack of stamped pieces.  These pieces had highly correlated thicknesses because they were made from large sheets of raw material, and the pieces used in one clutch are likely to have come from the same region of the same sheet of raw material.
    The moral is: real parts are often correlated to each other, and the RSS formula does not apply to these cases.
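    A small simulation of the penny example (penny mean and sigma as guessed above; the rest is illustrative scaffolding):

        import random, statistics

        def stack(correlated):
            if correlated:
                return 50 * random.gauss(1.00, 0.01)   # 50 identical pennies
            return sum(random.gauss(1.00, 0.01) for _ in range(50))

        for corr in (False, True):
            s = statistics.stdev(stack(corr) for _ in range(20_000))
            print(corr, round(s, 4))
        # False -> ~0.0707 mm (RSS holds for independent pennies)
        # True  -> ~0.5 mm    (perfect correlation: sigmas add, not their squares)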

    #94722

    Anonymous
    Guest

    Andy:
    I believe you have raised a really important issue and have provided a good example. It is about time that we got back to some real statistical engineering.
    Cheers,
    Andy 

    #148953

    jeff bates
    Participant

    Hello all,
    I have recently been given the task of writing a procedure for installing and aligning a relatively long, somewhat complex steel structure.  I have been asked to come up with a way to deal with tolerance stack-up among the multiple components.  I am new to the area and am looking for some advice.  Reading through the previous postings, I came across the one that discusses tolerance stack-up vs. statistical modeling.  I will be dealing with a relatively small number of components, about 30, in an x, y, z coordinate frame of reference. My question is: what is the primary benefit of statistical tolerancing?  Is it simply to deal with larger numbers of sub-components, or is there some other mystical benefit that I need to learn?

    #148957

    mand
    Member

    Why don’t you ask Mikel Harry … tolerance stack-ups are the way he came up with his drifting process averages!!!
    Why have so many people believed his crap?

    #156650

    dinesh
    Participant

    Hi
    Where could I get free study materials for tolerance and stack-up analysis?

    #172275

    s.victor
    Member

    Why do we need tolerance stack-up analysis?
    Can anyone give an exact application of stack-up analysis?
     
    victor.s

    #173421

    Chaitanya N.N
    Participant

    Variation is inherent in nature and nothing is perfect. No two parts are the same, and to ensure your parts go together and assemble 100% of the time you MUST do a tolerance stack-up. One great, easy-to-use tool for this kind of activity is Sigmund from Varatech.

    #178139

    K.GOVINDASAMY
    Participant

    You will get some free tools in marakek.
    Even a trial version is available for one month of use.
    You can install it on your machine and practice.
    If you want any more information, let me know.
    I can help you with this.
    Thanks
    K.Govindasamy
     
     
     

    #185936

    surya
    Member

    Hi,
    Can anyone help me get free study material for “tolerance stack-up analysis”? I am very new to this area; everything appears to me as Greek and Latin. Please can someone guide me to a better understanding of tolerance stack-up analysis, especially for assemblies of overlapping parts. Moreover, I want to better understand the concepts and principles of GD&T, and also the ANSI Y14.5M standard. How can I get that free study material, with some real-life examples, to help my understanding?
