iSixSigma

t value with non-integer degrees of freedom


    #33611

    Gastala
    Participant

    Hi
    In the MSA Manual's test for bias you are required to look up a t value where the number of degrees of freedom is a non-integer (10.8 in the example in the manual).
    Is there any way of finding the exact value, apart from interpolation?
    Excel truncates non-integer degrees of freedom and I can't see any way of doing it in Minitab.
    Thanks
    Glen

    #91175

    Statman
    Member

    Glen,
     
    The reason that the degrees of freedom are non-integer is that the method uses a range estimate for the standard deviation.  When the range is used, the approximation (called the Patnaik approximation) involves a loss of degrees of freedom.  This loss can be as high as 25% with larger sample sizes.
     
    The determination of the degrees of freedom is not really straightforward, as it is approximately 1/(-2 + 2*sqrt(1 + 2*(c.v.)**2/g)), where g is the number of groups and c.v. is the coefficient of variation of d2 (the range estimate constant).
     
    Excel does not allow non-integer degrees of freedom, and I don't know of a Gage R&R program that carries the fractional degrees of freedom as part of the package.  I would recommend that you use the root mean square method to estimate the standard deviation and then use the typical t-test and degrees-of-freedom determination.  After all, why sacrifice degrees of freedom when you don't have to?
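
    For instance, a minimal sketch in Python (scipy assumed, using the bias readings from the manual's example that are quoted later in this thread):

        # Ordinary one-sample t-test of H0: bias = 0, using the sums-of-squares
        # standard deviation and the full n - 1 = 14 degrees of freedom.
        from scipy import stats

        data = [-0.4, -0.3, -0.2, -0.1, -0.1, 0, 0, 0, 0,
                0.1, 0.1, 0.1, 0.2, 0.3, 0.4]
        t_stat, p_value = stats.ttest_1samp(data, popmean=0)
        print(t_stat, p_value)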
     
    By the way, I have seen references to this “MSA Manual” in several posts.  Being just a simple statman, I do not have this and do not know how I could get one.  Could you lead me in the right direction?

    #91180

    Gastala
    Participant

    Hi
    Thanks for the information.
    The MSA Manual Third Edition is available from the Automotive Industry Action Group http://www.aiag.org specifically http://www.aiag.org/publications/quality/fmea3.asp
    In Australia you can get it from the Federation of Automotive Part Manufacturers  http://www.fapm.com.au but it's about three times the price!
    It is greatly expanded from the previous version, so it is well worth checking out.
    Regards
    Glen
     

    #91182

    Statman
    Member

    Glen,
    Thanks for the reference
    Cheers

    #91188

    Doc
    Participant

    As an FYI to those who do not have the MSA ref. manual, the example is doing a one-sample t-test of H0: mu=0 for the following data:
    {-0.4, -0.3, -0.2, -0.1, -0.1, 0, 0, 0, 0, 0.1, 0.1, 0.1, 0.2, 0.3, 0.4}
    I assume that you see from the example in the MSA ref. manual that the standard deviation is calculated using the sample range:
    s = (max – min)/d*2      (where the * is superscript & 2 is subscript)
    You should also see that d*2 and the respective degrees of freedom are obtained from the d*2 table in Appendix C. Looking at the table, the number of subgroups (the rows) is 1, and the subgroup size (the columns – the sample size) is 15. This cell of the table gives degrees of freedom of 10.8 and a d*2 value of 3.55333.
    s = (0.4 – (-0.4))/3.55333 = 0.22514  with 10.8 df
    The t-statistic equals
    t = xbar/[s/sqrt(n)]
    and the confidence interval for the bias is given as
      xbar ± t(df, 0.975)*[(d2/d*2)*s/sqrt(n)]
    Note: I don’t understand where the d2/d*2 comes from and never really understood the difference between d2 and d*2. If anyone has a good reference . . .
    OK to your question . . . in this case your degrees of freedom are 10.8. Since t-tables don’t have fractional degrees of freedom, the authors of the MSA ref. manual interpolate between 10df and 11df as follows:
    tM = tH – [(dfM – dflL)*(tH-tL)/(dfH-dfL)]
    Where . . .
    tM is t at the intermediate (mid) degrees of freedom – the value we want
    tH is t(high) – the larger value of t
    tL is t(low) – the lower value of t
    dfM is the mid level of the degrees of freedom, 10.8 in this example
    dfH is the larger value of df – note this corresponds to tL, not tH
    dfL is the lower value of df – note this corresponds to tH, not tL
    So, for this example, if you look in a t-table you will find:
    t(10df, 0.975) = 2.228
    t(11df, 0.975) = 2.201
    t(10.8df, 0.975) =
    tM = 2.228 – [(10.8-10)*(2.228-2.201)/(11-10)] = 2.2064
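
    Putting the whole example into Python (a sketch, scipy assumed; the d*2 value and the 10.8 df are the Appendix C figures quoted above):

        import math
        from scipy import stats

        data = [-0.4, -0.3, -0.2, -0.1, -0.1, 0, 0, 0, 0,
                0.1, 0.1, 0.1, 0.2, 0.3, 0.4]
        n = len(data)                              # 15
        xbar = sum(data) / n
        d2_star = 3.55333                          # 1 subgroup of size 15
        s = (max(data) - min(data)) / d2_star      # 0.8/3.55333 = 0.22514
        t_stat = xbar / (s / math.sqrt(n))

        # Linear interpolation between t(10) = 2.228 and t(11) = 2.201:
        t_interp = 2.228 - (10.8 - 10) * (2.228 - 2.201) / (11 - 10)  # 2.2064
        # Exact value - scipy's t distribution accepts fractional df:
        t_exact = stats.t.ppf(0.975, df=10.8)      # about 2.206
        print(t_stat, t_interp, t_exact)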

    #91194

    Statman
    Member

    Doc,
    “Note: I don’t understand where the d2/d*2 comes from and never really understood the difference between d2 and d*2. If anyone has a good reference . . . ” – didn’t we recently discuss this under the string “Bias Calculations”? I believe both you and I each gave a reference.
    But you've got to help me out on why on earth I would want to go through these calculations in Excel and/or Minitab.  Since they will both provide a traditional one-sample t-test, why would I want to go through the extra steps, particularly when I am losing degrees of freedom?  If you look at the example, a traditional t-test will have 14 degrees of freedom vs 10.8 for the studentized range method.  That's a 23% reduction.  Why would I want to increase my sample size by 23% to have equivalent power?
    Am I missing something here?

    #91196

    Doc
    Participant

    You are missing nothing at all. I agree 109%.
    Because those who created the MSA ref. manual are still stuck in the old days of hand calculation, they tend to use the range-based formulas where d*2's and other mystical constants are necessary.
    I myself feel pretty sure computers are here to stay, and I most certainly would recommend the use of more standard methodologies, such as using the sums of squares to estimate the standard deviation. Based upon Montgomery's SPC book, the range-based estimates are NEVER better than SS-based estimates.
    The same can be said for the average & range method of GR&R. The ANOVA-based variance component method is much better.
    By the way, I also always wondered why those who developed SPC formulas decided to use the unbiasing constants (here I'm talking about c4, not d2) for the sample standard deviation when they are never used anywhere else that I know of.
     

    #91199

    Gabriel
    Participant

    It is also pretty inexpensive and you can buy it on-line.
    If you ever get it, tell me. I’d love to discuss with you some things I don’t agree with.

    #91200

    Gastala
    Participant

    Hi Gabriel
    I'd be interested to hear what else you don't agree with – why not post them?
    By the way, have you worked through the calculation on page 88, Table 3, for the t statistic? I get 0.1253 every time, not 0.1153 as stated. This carries through to the dependent calculations.
    Glen

    #91214

    Statman
    Member

    Why is your agreement 109% and not 100%?  Are you including a 1.5 sigma shift or a bias correction factor?
     
    My feeble attempt at humor.
     
    I was pretty sure that the reason was the book.  My post was more to make sure it was clear that we are not promoting a complicated and inefficient method.  This brings me to your point about the recommendations in this MSA manual.  I don't know why we have to take straightforward statistical methods and give them different names or different procedures when they are applied to measurement.  It is my experience that you teach the concept/method once and demonstrate its application to different areas.  You are right that computational capabilities make these hand-calculation methods unnecessary.  It seems to me that it would be better to demonstrate the efficient method and save the quick methods for an appendix.
     
    C4 is used when the process standard deviation is estimated using the average of the subgroup standard deviations.  The unbiased estimate is the root mean square error (a.k.a. the pooled standard deviation).  The average is another convenience estimate, since it is easier to average the standard deviations than to calculate the RMSE.
     
    What I find interesting is that Minitab uses the average standard deviation method as a default but gives you the option to use the RMSE for an Xbar-S chart.  You would think that the average standard deviation would be unnecessary, or at least that the RMSE would be the default.
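
    For what it's worth, the two estimates are easy to compare side by side; a sketch in Python (numpy assumed, made-up subgroup data):

        import numpy as np

        subgroups = np.array([
            [9.98, 10.02, 10.01, 9.97, 10.00],
            [10.03, 9.99, 10.00, 10.01, 9.98],
            [9.99, 10.00, 10.02, 9.96, 10.01],
        ])  # k = 3 subgroups of size n = 5 (illustrative values only)

        s = subgroups.std(axis=1, ddof=1)      # subgroup standard deviations
        sbar = s.mean()                        # average SD (needs c4 to unbias)
        pooled = np.sqrt((s ** 2).mean())      # RMSE / pooled standard deviation
        print(sbar, pooled)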
     
    Cheers,
     
    Statman

    #91215

    Gastala
    Participant

    Hi Statman
    Can you clarify that point about C4?  I understood that, whilst the sample variance is an unbiased estimate of the population variance, the sample standard deviation is not an unbiased estimate of the population standard deviation.  The appropriate correction factor is C4.
    (see for example the Introduction to Statistical Quality Control 4th Edition by Montgomery page 92)
    I didn’t think C4 was used because of the averaging of the subgroup standard deviations in calculating the limits for the S charts, but because the nature of control charts requires an unbiased estimator.
    Regards
    Glen
     

    #91222

    Statman
    Member

    Glen,
     
    Yes, my answer was not very complete, and I probably left the wrong impression about the use of C4.  However, I think I was trying to say the same thing as you stated.  I don't have Montgomery's book, so this is another one I will have to get (Glen, you're going to cost me a fortune in book fees).  Anyway, let me see if I can explain what I was trying to explain, and hopefully not cause more confusion.
     
    Let’s say we have a random variable x and x has a normal distribution.  From that normal universe, we can develop a distribution of sample standard deviations and a distribution of sample variances by taking k samples of size n.   
     
    By definition, the population standard deviation is the square root of the population variance.  It can be shown that the mean value of the distribution of sample variances converges to the population variance as k approaches infinity, so the sample variance is an unbiased estimate of the population variance.  Therefore, the square root of the mean value of the distribution of sample variances equals the population standard deviation.
     
    When n is small, however, the mean value of the distribution of sample standard deviations is not the square root of the mean value of the distribution of sample variances, nor is the standard deviation of the distribution of sample standard deviations equal to the square root of the standard deviation of the sample variances.  Therefore, the mean value (average) of the sample standard deviations is a biased estimate of the population standard deviation.
     
    It can be shown that the average of the distribution of sample standard deviations from a normal distribution is C4*s, where s is the population standard deviation, and that the standard deviation of this distribution is s*sqrt(1 - C4**2).
     
    Therefore an unbiased estimate of the population standard deviation is C4*sbar
     
    I can’t remember the exact form of C4, but I’m sure it is in most advanced quality technology books. 
     
    But back to my original post, C4 is used when the process standard deviation is estimated using the average of the subgroup standard deviations (sbar) because it will give you an unbiased estimate.  You can alternatively get an unbiased estimate using the root mean square error (a.k.a., the pooled standard deviation).  And remember, the pooled standard deviation is not the same as the average standard deviation.
     
    Clear or worse?
     
    Cheers,
     
    Statman

    #91223

    Statman
    Member

    Correction,
    Therefore an unbiased estimate of the population standard deviation is sbar/C4
    I type faster than my brain functions sometimes (and I can’t type very fast).
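
    For reference, the exact form is c4 = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2), and a quick simulation bears out the corrected statement; a sketch in Python (numpy assumed):

        import math
        import numpy as np

        def c4(n):
            return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

        rng = np.random.default_rng(1)
        n, k, sigma = 5, 200_000, 1.0
        s = rng.normal(0.0, sigma, size=(k, n)).std(axis=1, ddof=1)

        print(s.mean())          # close to c4(5)*sigma = 0.9400, not 1.0
        print(s.mean() / c4(n))  # sbar/c4: close to sigma = 1.0
        print(s.std())           # close to sigma*sqrt(1 - c4(5)**2) = 0.341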

    #91231

    Gabriel
    Participant

    Hi Glen.
    I haven’t worked through the calculations.
    My disagreements are conceptual.  The reason I didn't post them (again) is that I did so a couple of times before without much feedback from the forum.  But, at your request, here I go again:
    – The validity of the r&R as an indicator of "goodness", and the way the acceptance criteria are established: if my process shows stable and has a very good Cp/Cpk even when the measurement is affected by a poor r&R, why would the instrument be unacceptable?
    – Also in the r&R, the use of a sample of only 10 parts to estimate the total variation (which includes the process variation), when for a typical process capability study a sample of 100 or more is used.  The result of the r&R (as a % of the process variation) is very easy to manipulate in this way (non-robust at best).
    – The acceptance criteria for the bias: fail to reject that it is zero?  The bias will not be exactly zero, full stop.  Yet, if it is small, you may fail to reject that it is zero, especially if the repeatability is not very good.  In that way, the criterion punishes knowledge: if I have enough information to reject that the bias is zero, then the instrument is not acceptable, no matter how small that bias is (say I find a CI for the bias of [+0.01, +0.05]).  If your instrument has worse repeatability and you run the test with a smaller sample, you will probably accept your instrument even when the "probable" bias is much worse than mine (say you find a CI for the bias of [-0.01, +0.2]).  You would be accepting an instrument that is probably more biased than mine, which was rejected just because I know my instrument better than you know yours.
    – Exactly the same for linearity.
    – A Kappa study (attribute r&R) is OK when the characteristic is a clear yes/no: red/blue, screw present/not present, hole done/not done, etc.  However, when the characteristic is in fact variable but inspected by attribute (a diameter with a go/no-go gage, scratches acceptable up to this level, red between these colour masters, etc.) there is ALWAYS a grey zone.  In this case, the result of the Kappa study is very dependent on the parts taken for the test.  Take most parts out of the grey zone and the test will be passed.  Take several parts within the grey zone and the test will fail.  This, again, makes the test very easy to manipulate and non-robust (the same measurement system can be accepted as "perfect" and rejected as "awful").  The test does not consider that there IS a grey zone and that an "acceptable" grey zone should be defined, within which the actual grey zone should lie.

    #91236

    Gastala
    Participant

    Hi Gabriel
    Interesting points, my thoughts would be:
    Regarding Cp/Cpk: the Gauge R&R is mainly directed at process improvement, not conformance to specification. If your process has a good Cp, it just means that the combined process and measurement variation is well within specification. Despite your process looking stable and being quite acceptable, your measuring system may be masking special causes. If you were looking to improve your process to its optimum through a Six Sigma activity (despite it already being amply adequate), that might be important. If you don't so intend, then it doesn't matter.
    Yes, the 10 parts is surprisingly low. The figure of 30 gets tossed around, but in my view it is 50 plus before things start settling down, and the figure of 100 you quote is more like it. That's why it surprises me that they are so fussy about the subtleties of d2 and so on.
    The way I interpret the bias rule is that if there is a demonstrable amount of bias (at the 95% confidence level) you should fix it. If you can’t fix it you should adjust for it. I can live with that. However I agree with your point because you are asked to get the concurrence of the customer and that would encourage companies to avoid it by using minimum compliance.
    With the Kappa test it does draw attention to using a large number of parts representative of the spectrum, but as you say it is open to abuse.
    The bit that bemuses me is at the bottom of page 132, where it says "the team decided to go with these results since they were tired of all this analysis and these conclusions were at least justifiable since they found the table on the web". What does that mean? It seems to say that the whole thing is a waste of time!
    The whole thing seems to be a bit of a grab-bag of ideas. For example, the number-of-data-categories formula is taken from the First Edition (1984) of the Wheeler and Lyday text "Evaluating the Measurement Process". That is pretty hard to find, because the second edition was published in 1989 and most libraries threw the first edition away. However, if you go to the trouble of getting both editions, you will find that this formula only appears in the first edition. Wheeler and Lyday apparently discarded it in favour of another measure which is supposed to do the same thing. So the manual is calling on a twenty-year-old (and outdated) book for a formula that was apparently disowned by its originators.
    As far as I can find out there isn’t a source where you can get more information. If I can’t understand something in Minitab I can get a paper from their web site that gives a full detailed mathematical explanation – I may not understand it but at least I can rest easy that somebody does!
    So I share your conceptual concerns with the manual. On the other hand, it is not a standard, only a set of recommendations. If you are going for QS9000 (TS16949) it is up to the auditors whether your compliance is adequate or not. Another factor there is that it is not aligned with Minitab, so if you use Minitab for your linearity studies or attribute studies (for example) you will get different results anyway.
    Regards
    Glen
     

    #91237

    Gastala
    Participant

    Hi Statman
    Better or worse? Possibly worse, but let's not despair.
    I’ve got another book here called “Probability and Statistics for Engineering and the Sciences” by Devore (nothing to do with quality, control charts or the like). It says:
    “Although S**2 is unbiased for sigma**2, S is a biased estimator of sigma (its bias is small unless n is quite small). However there are other good reasons to use S as an estimator, especially when the population distribution is normal. These will become more apparent when we discuss confidence intervals and hypothesis testing in the next several chapters” (so far they haven’t).
    I assumed that C4 was used to compensate for this bias.
    I’m not clear from your email whether you are saying that (getting back to S charts):
    1)  when you calculate the standard deviations of the subgroups each and every one of those standard deviations is biased and hence the average is biased (but can be reduced by using the pooled standard deviation because that effectively increases the sample size) or,
    2) the bias is introduced by the averaging operation and can be avoided altogether by using the pooled standard deviation as an alternative to averaging.
    My assumption was that C4 wasn't used in most applications of the standard deviation because of those mysterious 'other good reasons' mentioned by Devore, but that in the case of S charts the argument went the other way: if the standard deviation were biased, the control limits would be in the wrong place (and would have to be asymmetrical about the mean to be in the right place).
    Regards
    Glen

    #91240

    Doc
    Participant

    I want to clarify two points:
    1. From Montgomery’s Introduction to Statistical Quality Control, Third Edition, section 5-3.1, page 212:
    “If sigma^2 is the unknown variance of a probability distribution, then an unbiased estimator of sigma^2 is the sample variance
    S^2 = Sum(xi-xbar)^2/(n-1)
    However, the sample standard deviation S is not an unbiased estimator of sigma. If the underlying distribution is normal, then S actually estimates c4(sigma), where c4 is a constant that depends on the sample size. Furthermore, the standard deviation of S is sigma(SQRT(1-c4^2)). This information can be used to establish control charts on xbar and S.”
    The sample standard deviation is not unbiased. c4 is used to correct the bias associated with the sample standard deviation. It is not used because users average multiple standard deviations for subgroups to estimate the overall standard deviation.

    #91242

    Gabriel
    Participant

    Glen, Thanks. Good feedback.
    The fact that the MSA manual is just a bunch of (not very good) recommendations is not very clear (to the auditors, at least). I never want to hear what an auditor has to say about a measurement system that is biased and non-linear with 95% confidence, and that I am using together with an Xbar-R chart even when the r&R% is 100%. I don't want to have to explain to him that the MSA is just a reference and that, the bias and linearity error being only 1% of the tolerance and the Cpk of the process I am charting being larger than 4, yes, the measurement system could be improved, but I have more important things to do (even in favour of my customer).
    About the 10 parts, "surprisingly low" is too soft for me. You can reasonably expect the standard deviation of a sample of size 10 to be as low as 0.6 of the actual population's standard deviation or as high as 1.4 times it. What would be the r&R of the r&R method itself, as a "system to measure the goodness of a measurement system"?
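
    (For a normal sample of size n, (n-1)*S^2/sigma^2 follows a chi-square distribution with n-1 df, so that range is easy to verify; a sketch in Python, scipy assumed:)

        import math
        from scipy import stats

        n = 10
        lo = math.sqrt(stats.chi2.ppf(0.025, n - 1) / (n - 1))
        hi = math.sqrt(stats.chi2.ppf(0.975, n - 1) / (n - 1))
        print(lo, hi)   # roughly 0.55 and 1.45 - a central 95% range for S/sigma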
    About Kappa, it is not only that it is open to abuse. If I have a tolerance of 10±0.1, the inspection method is a go/no-go gage, and the grey zones (where the part will sometimes be accepted and sometimes rejected just by chance) are 9.9±0.01 and 10.1±0.01, it is pretty probable that the fraction of parts that fall in the grey zone will be small enough to pass the Kappa test, never mind how large your sample is and never mind whether these grey zones are acceptable or not for your process/product. Furthermore, if the process has a fair capability, the sample will most probably not contain a part very close to the grey zones (the parts will be well within specification), so you will find not just good but perfect agreement (in the 10±0.3 range, for example).

    #91251

    Ron
    Member

    This is a great example of the difference between Six Sigma methodology and classical statistics, which have been around for a very long time.
    What are you attempting to discover with the t value? Whether your hypothesis is true or false, correct?
    Simply take the nearest approximation you have and compare it to your calculated value. If it is close then perhaps you need to get more sophisticated. If it is not, draw the proper conclusions and proceed.
    If all you were interested in was a methodological discussion of stuff that doesn't matter… I apologize for jumping into the discussion.

    #91260

    Gabriel
    Participant

    Let me give this a try:
    Sbar is OK for control limit calculations. sqrt(S^2bar) and Sbar/c4 are OK for Cp/Cpk calculations. (Also, Stot, the sample standard deviation of all the parts from all the subgroups taken together, is OK for Pp/Ppk calculations, but let's leave that aside for now.) Why? Because Sbar is an unbiased estimator of the average of the sample standard deviations, while sqrt(S^2bar) and Sbar/c4 are both unbiased estimators of the population sigma. Why? Read on if you have time.
    We have a population of values x with some distribution (never mind which one for now). This distribution has an average µ|x, standard deviation sigma|x, and variance sigma|x^2.
    Now we take samples of size n from this population. Each sample has a sample standard deviation S and a variance S^2. So now we have a new population: the population of values S, which is the population of sample standard deviations of the samples of size n taken from the population of x. This new population will have an average µ|S and a standard deviation sigma|S.
    Now I can take a sample of k samples of size n, from which I will obtain k values of S. The sample average being an unbiased estimator of the population average, Sbar (the average of the standard deviations of the k samples) is an unbiased estimator of µ|S.
    In a control chart you want to monitor the evolution of a parameter. If this parameter is S, then Sbar, as an unbiased estimator of µ|S, is a valid centre line, from which you apply an estimator of µ|S±3sigma|S to find the upper and lower control limits.
    Now, if you are not interested in the control chart but in the process capability, then you are interested in sigma|x. S is not an unbiased estimator of sigma|x. But S^2 is an unbiased estimator of sigma|x^2.
    Think of yet another population: the values of S^2 (the variances of samples of size n taken from the population of x). This population will have an average µ|S^2, of which S^2bar (the average of the k variances of the k samples – do not confuse it with Sbar^2, which would be the square of the average of the k standard deviations) is an unbiased estimator. It can be shown that µ|S^2 = sigma|x^2 (exactly equal, not an estimate). Then S^2bar is an unbiased estimator of sigma|x^2. Since sigma = sqrt(sigma^2), sqrt(S^2bar) is an unbiased estimator of sigma.
    Now, it is evident that, given a constant distribution shape, the average of the sample standard deviations µ|S MUST be proportional to the population's standard deviation sigma|x. Well, maybe it is not so evident. Think of a population of diameters measured in mm. This population has an average and a standard deviation. Now I take samples of size n, and each sample has a standard deviation. Now I convert everything to inches, so everything is divided by 25.4, where "everything" includes the individual values, the average, the distance from each value to the average (hence the population standard deviation), the sample averages, the distance from each individual in a sample to its sample average (hence the sample standard deviations) and, hence, the average of the sample standard deviations. That said, it is now evident that µ|S = a*sigma|x, as long as we always keep the same distribution shape and the same sample size. In other words, "a" is a function of the distribution shape and the sample size n. Let's call a = c4 when the distribution is normal. Now that the shape is fixed, c4 is a function of n only. Then, for a normal distribution, µ|S = c4*sigma|x or, if you prefer, sigma|x = µ|S/c4 (again, exactly equal, not an estimate). Finally, Sbar being an unbiased estimate of µ|S, Sbar/c4 is an unbiased estimator of sigma|x. I am not sure, but I guess sqrt(S^2bar) is more efficient than Sbar/c4.
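
    That last guess is easy to check by simulation; a sketch in Python (numpy assumed) comparing the spread of sqrt(S^2bar) and Sbar/c4 over many sets of k subgroups of size n:

        import math
        import numpy as np

        def c4(n):
            return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

        rng = np.random.default_rng(42)
        n, k, reps, sigma = 5, 25, 20_000, 1.0
        s = rng.normal(0.0, sigma, size=(reps, k, n)).std(axis=2, ddof=1)

        est_pooled = np.sqrt((s ** 2).mean(axis=1))   # sqrt(S^2bar)
        est_sbar_c4 = s.mean(axis=1) / c4(n)          # Sbar/c4

        # Both sit close to sigma on average; the estimator with the smaller
        # spread across replications is the more efficient one:
        print(est_pooled.mean(), est_pooled.std())
        print(est_sbar_c4.mean(), est_sbar_c4.std())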
    Ufffff. My fingers burn…

    #91265

    Doc
    Participant

    Ahhh, but if the person doing the test happens to be an automotive supplier, they are forced to use the AIAG MSA Third Edition methodology.
    Failure to do so without express written permission from the customer can result in loss of QS-9000 or, more recently, TS16949 certification. Loss of these certifications means the company can't bid for new business with the certification subscribers (such as Ford, GM, Chrysler, . . . ).  For most automotive suppliers that means going out of business.
    In the automotive industry, at least for MSA, methodology is critical. 

    #91275

    Statman
    Member

    Doc,
     
    There was no need to quote me from Montgomery as my previous post said the exact same thing.  In fact, I provided the justification for what Montgomery wrote based on the mean value of the distributions of sample variances and sample standard deviations.
     
    I never said that the sample standard deviation is an unbiased estimator of sigma.  I said that the square root of the mean value of the distribution of sample variances is an unbiased estimator of sigma (the population standard deviation). This is because the mean value of the distribution of sample variances is an unbiased estimate of sigma**2
     
    I also said that the mean value of the distribution of sample standard deviations is not the square root of the mean value of the distribution of sample variances.  Therefore, the mean value of the distribution of sample standard deviations is a biased estimate of sigma.
     
    Do you understand what is meant by “Mean value of the distribution”? 
     
    When we calculate the average of subgroup standard deviations, we are calculating the mean value of the sample standard deviations. Why do you think we call it an "Sbar" chart?  So, yes, the use of C4 is because "users average multiple standard deviations for subgroups to estimate the overall standard deviation".  If we used an S**2 chart and took the square root of the average of the sample variances, we would have an unbiased estimate (i.e., the pooled standard deviation).
     
    Sbar charts, when we are calculating the process sigma, are the only place I can think of where we run into the situation of averaging standard deviations to estimate the population standard deviation.  Most other uses of the sample standard deviation are when we want to compute probabilities (hypothesis tests, confidence intervals, etc.).  It makes no difference whether we work with the sample variance or the sample standard deviation, as they will yield the same probabilities, since the sampling fluctuations (sampling error) will be equivalent.

    #91276

    Statman
    Member

    “This is a great example of the difference between six sigma methodology and classical statistics which have been around for a very long time.”
     
    Patnaik published his work "The Use of Mean Range as an Estimator of Variance in Statistical Tests" in Biometrika in 1950, and it has been used ever since.
     
    How do you define classical statistics?

    #91277

    Statman
    Member

    Glen,
     
    What I am saying is sort of your number 2.
     
    2) The bias is introduced by the averaging operation and can be avoided altogether by using the pooled standard deviation as an alternative to averaging.
     
    But I would add this to it:
     
    2) The bias is introduced (when we estimate the process sigma) by the averaging of (the subgroup standard deviations) operation and can be avoided altogether by using the pooled standard deviation as an alternative to averaging (unbiased estimate of the process sigma).
     
    It has nothing to do with an increase in sample size. 
     
    As for those mysterious ‘other good reasons’ mentioned by Devore, most other uses of the sample standard deviation are when we want to compute probabilities (hypothesis tests, confidence intervals, etc).  It makes no difference whether we work with the sample variance or the sample standard deviation as they will yield the same probabilities since the sampling fluctuations (or sampling error) will be equivalent.
     
    And this is the most I have thought about C4 since I had to derive it in graduate school XX years ago.
     
    Highest Regards,
     
    Statman
