• I am not sure whether the first part of your post was a question or not.  If you want to know how NDC (number of distinct categories) is calculated:
    NDC = (std dev of parts / std dev of measurement) × 1.41
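    That calculation can be sketched in a few lines of Python; the standard deviations below are made-up numbers, purely for illustration:

```python
import math

# Hypothetical gage R&R results (illustrative values, not from the post):
# part-to-part standard deviation and measurement-system standard deviation.
sd_parts = 2.5
sd_measurement = 0.6

# Number of distinct categories: how many non-overlapping groups of parts
# the measurement system can reliably distinguish.
ndc = (sd_parts / sd_measurement) * 1.41
print(math.floor(ndc))  # NDC is conventionally truncated to an integer
```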

  • I nominate Walter Shewhart.

  • I would do some diagnostics (they are quick) before I trashed the ANOVA results.  If you jump to nonparametrics you might give up some ability to detect significant differences in the means.
    Which normality test yielded a significant result?  What was your sample size?  Did you look at deleted t or standardized residuals?  It is possible that a s…[Read more]

  • First I agree with Darth on converting it to continuous. 
    If I understand your problem, you have more than 7 columns but 7 or fewer rows.
    If that is the case, you should be able to switch the columns to rows and the rows to columns.  Minitab Help says that the number of rows is limited by ‘space available for storage’.  If that doesn’t work, the ch…[Read more]

  • BTDT,
    Thanks, I will use this in a week or two.  I am a former ‘nerd’ with a bunch of D20s in a box in the corner of my closet.

  • See this:
    It doesn’t account for the differences between the months / geographic locations.  It just lets you estimate the ‘average variation’ of the different months / geographic regions.

  • Andy / Kirk / Robert,
    If narayan had said that the distribution of sampling means will ‘tend’ towards normality as n increases would you then agree?  Would you agree if he added something like ‘for a parent distribution that has a finite mean and variance?’  This comes from Robert’s linked post:
    “Although the central limit theorem is concerned w…[Read more]

  • I am not sure that I understand the question, but the Anderson-Darling test for normality should work fine with a sample size of 523 observations.  I have heard that it can be a problem with more than a thousand data points.
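    As a sketch, here is what an Anderson-Darling normality check looks like in Python with SciPy; the data below are simulated, purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=523)  # illustrative normal data

result = stats.anderson(sample, dist='norm')
print(result.statistic)        # A-squared statistic
print(result.critical_values)  # at the 15%, 10%, 5%, 2.5%, 1% levels
# If the statistic exceeds the critical value at your alpha, reject normality.
```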

  • I might have jumped the gun in my previous post.  I’m working on a similar project with a BB in our company and I think I was inside that ‘box.’
    Can you separate (with meters) after the utilities get inside your plant how much goes where?  How much goes to heat the building, how much goes to which process machines, etc.?  If not, can you get it…[Read more]

  • There are two parts:
    1) Business level – divide units of energy used by some reasonable measure of production level (units produced, maybe).
    2) Ambient conditions – this might be tougher.  Can you deseasonalize past usage after doing step 1 above?
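    Those two steps can be sketched roughly as follows; the monthly figures and seasonal indices are invented for illustration, not real data:

```python
# Step 1 and step 2 from above, with made-up monthly figures.
energy_used = [120_000, 115_000, 98_000, 90_000]   # kWh per month
units_produced = [10_000, 9_500, 8_200, 8_000]     # production level per month

# Step 1: energy intensity per unit of production.
intensity = [e / u for e, u in zip(energy_used, units_produced)]

# Step 2: crude deseasonalization: divide each month's intensity by a
# seasonal index (e.g., that month's historical average relative to the
# overall average).  Indices here are purely illustrative.
seasonal_index = [1.10, 1.05, 0.95, 0.90]
deseasonalized = [i / s for i, s in zip(intensity, seasonal_index)]
print([round(x, 2) for x in deseasonalized])
```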

  • Make one or more characteristics of the annealing process into factor(s).  Can you fix and control levels for these factor(s)?

  • faceman replied to the topic 3-Parameter Weibull in the forum General 15 years, 2 months ago

    The three-parameter Weibull is the same as the two-parameter except it has a 3rd parameter (threshold, symbol mu).  No defects happen before mu.  In a two-parameter Weibull, defects can start near 0.
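    A quick way to see the threshold in action is SciPy's `weibull_min`, where a nonzero `loc` plays the role of the threshold; the parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

# In SciPy, the 3-parameter Weibull is weibull_min with a nonzero 'loc'
# (the threshold: no failures occur below it).
shape, threshold, scale = 1.5, 5.0, 2.0   # illustrative values

rng = np.random.default_rng(1)
data = stats.weibull_min.rvs(shape, loc=threshold, scale=scale,
                             size=1000, random_state=rng)

print(data.min())  # every observation lies at or above the threshold of 5.0
```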

  • faceman replied to the topic Mode in the forum General 15 years, 2 months ago

    If you have categorical data, use Tables > Cross Tabulation or Stats > Basic Statistics > Display Descriptive statistics.  The category with the highest count is your ‘mode’.  I am not sure that it is called a mode in formal statistics when the data are categorical, but this would find the ‘most frequent value’, therefore it is conceptually the mode a…[Read more]
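    Outside Minitab, the same ‘most frequent value’ idea is a one-liner in Python; the category data here are made up for illustration:

```python
from collections import Counter

# Illustrative categorical data (not from the post).
colors = ['red', 'blue', 'red', 'green', 'red', 'blue']

counts = Counter(colors)
mode_category, mode_count = counts.most_common(1)[0]
print(mode_category, mode_count)  # 'red' appears most often (3 times)
```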

  • I’m not sure that I understand the question.
    By using ‘prevail’, are you asking which of the two is relevant?  You don’t have to pick between the two.  The ‘real’ model might have both significant 2-way interaction(s) and curvature (significant center points).  Therefore, it could be reasonable to augment your design in order to adequately mode…[Read more]

  • Fer,
    I would definitely trust Fisher’s exact method over the approximate method.  If I remember correctly, you had one treatment with an observed value of 1.  The approximation probably isn’t good there.  The exact method (hypergeometric) employed by Fisher’s test is better in your case.

  • Fer,
    How did you calculate the confidence limits?  You are right to question a low p value when you have overlapping confidence limits.  I would suspect that the method you used to calculate the confidence limits is less appropriate than Fisher’s exact test.
    In general, I would trust Fisher’s exact test.  You have a fairly small sample size relative to…[Read more]
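    For reference, Fisher’s exact test on a 2×2 table is straightforward in SciPy; the table below is a hypothetical one with a small observed count of 1, echoing the situation described, not Fer’s actual data:

```python
from scipy import stats

# Hypothetical 2x2 table (rows = treatments, columns = success/failure
# counts), with one small observed count as in the question.
table = [[1, 19],
         [8, 12]]

odds_ratio, p_value = stats.fisher_exact(table, alternative='two-sided')
print(round(p_value, 4))
```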

  • Chris,
    I don’t get why the p drops below 1.0 in the case of sampling without replacement.  Can you explain?  I understand the dependency issue but with only two colors I don’t see how I won’t have at least two that match.
    Thanks in advance.

  • I would say yes, you do.  At least report the Z value, as this quantifies your defect-rate baseline.  It is hard to estimate the defect rate from a picture alone; sample size is a huge factor that is hard to see in a histogram.
    For this part you could get the parameters of the lognormal / exponential distribution that characterizes yo…[Read more]
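    One way to get those distribution parameters is a maximum-likelihood fit; here is a sketch with SciPy for the lognormal case, using simulated data and an assumed spec limit chosen purely for illustration:

```python
import numpy as np
from scipy import stats

# Illustrative lognormal "defect characteristic" data (assumed, not real).
rng = np.random.default_rng(2)
data = rng.lognormal(mean=1.0, sigma=0.5, size=500)

# Fit a lognormal; fixing the location at 0 (floc=0) is the usual choice.
shape, loc, scale = stats.lognorm.fit(data, floc=0)

# Estimated fraction beyond an upper spec limit (USL assumed for the example).
usl = 8.0
p_defect = stats.lognorm.sf(usl, shape, loc=loc, scale=scale)
print(round(p_defect, 4))
```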

  • Good question.  Hopefully you’ll get lots of responses.  Here’s mine:
    RSq (higher is better)
    MSE (Lower is better)
    Residuals – patterns in the residuals-vs-fits plot should be understood / resolved
    Do a logical check against content expertise (if you violated some of the assumptions, you might get some garbage out).
    Variance Inflation w…[Read more]
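    The Variance Inflation Factor on that list can be computed by hand from the definition VIF_j = 1 / (1 − R²_j), where R²_j comes from regressing predictor j on the other predictors.  A minimal sketch, using simulated data with two deliberately collinear predictors:

```python
import numpy as np

# Made-up data: x2 is deliberately collinear with x1, x3 is independent.
rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
x2 = x1 * 0.9 + rng.normal(scale=0.3, size=100)
x3 = rng.normal(size=100)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """VIF for predictor j: regress column j on the others."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])   # add intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

print([round(vif(X, j), 2) for j in range(X.shape[1])])
# x1 and x2 should show inflated VIFs; x3 should sit near 1.
```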
