iSixSigma

Chuck White

Activity

  • @Distocont, the value of a rating scale depends on your reason for collecting the data. If you are only trying to understand what your reject rate is, then a rating scale has zero value. However, if you want to use the data to improve the process, then a rating scale in many cases can have a lot of value, provided you can rate the severity of the…

  • Minitab 18 has the same functionality as 19 for both regression and ANOVA; the output may look a little different, but all of the content should be there.

    To Robert’s point, just make sure you always put categorical predictors in the correct window — if they are text values, Minitab will let you select them as categorical predictors without…

  • @rbutler, Minitab doesn’t explicitly explain the process or refer to “dummy variables” in the output, but it shows the coefficients for k-1 levels for each categorical predictor (the first level of course is the reference, included in the regression constant). It also displays a separate regression equation for each combination of categorical…
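
    For anyone curious what that looks like outside Minitab, here is a minimal sketch of the same k-1 dummy coding in Python with pandas and statsmodels (the data and column names are hypothetical):

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical data: one continuous predictor, one 3-level categorical
      df = pd.DataFrame({
          "temp":    [100, 110, 120, 100, 110, 120, 100, 110, 120],
          "machine": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
          "output":  [78.1, 80.2, 83.0, 75.4, 77.9, 80.5, 80.0, 82.3, 85.1],
      })

      # C(machine) expands into k-1 = 2 dummy columns; level "A" is the
      # reference and is absorbed into the intercept, as in Minitab
      model = smf.ols("output ~ temp + C(machine)", data=df).fit()
      print(model.params)  # Intercept, machine[T.B], machine[T.C], temp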

  • @rbutler is correct, but a lot of modern statistical analysis software will do all of that for you. I am currently using Minitab 19, and its dialog box for regression includes inputs for both continuous and categorical variables. For the categorical variables Minitab does just what Robert described, but it’s all in the background. You can also use…

  • For my last week of Black Belt training, I assign teams a project to get a balsa wood airplane to fly a specific distance (with a tolerance, of course), but also to stop within a set distance; in other words, they have a short runway to land on. Each team gets two different-sized balsa wood airplane kits, and they are permitted to mix and match…

  • Type II error is fuzzier than Type I error. To quantify beta risk (and power), you have to specify the difference of interest (how big a difference needs to be to have practical importance). You can never be confident that there is zero difference without testing the entire population, but power and sample size calculations give you the probability…
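
    As a concrete illustration of that last point (my example, with made-up numbers), a power and sample size calculation for a two-sample t test might look like this in Python with statsmodels, taking a difference of interest of 0.5 standard deviations and 5% alpha risk:

      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()

      # Sample size per group to detect a 0.5-sigma difference with
      # 5% alpha risk and 20% beta risk (80% power)
      n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
      print(round(n))  # about 64 per group

      # Conversely, the power (1 - beta) you get from 30 per group
      power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30)
      print(round(power, 2))  # roughly 0.47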

  • @BayanKamal, it sounds like you have a basic (but common) misunderstanding of how hypothesis testing works. A hypothesis test is similar to a proof by contradiction. In order to prove something logically or mathematically, a common approach is to assume the opposite and then demonstrate that this assumption results in a contradiction.

    In a…
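
    To make the proof-by-contradiction analogy concrete (my sketch, not part of the truncated post): you assume the null hypothesis is true, then ask how implausible the observed data would be under that assumption. For example, with SciPy:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      # Hypothetical measurements; H0: the population mean is 10.0
      sample = rng.normal(loc=10.3, scale=0.5, size=25)

      # Assume H0 and measure how surprising the sample would be
      t_stat, p_value = stats.ttest_1samp(sample, popmean=10.0)
      print(t_stat, p_value)

      # A small p-value is the "contradiction": the data are implausible
      # under H0, so we reject it. A large p-value is not proof that H0
      # is true, only a failure to contradict it.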

  • I would say only D and E are correct.

    I agree with @rbutler about B — the only way low cutting speed results in a higher tool age is if there is an interaction, but since they didn’t provide an interaction plot, it would be impossible to conclude that.

    My guess on C is that the 2 is supposed to be a superscript (3 to the 2nd power), but that…

  • I agree with @rbutler — if the process average is actually 1.0 and the standard deviation is 0.01, then the probability is close enough to zero to not worry about it.

    But if you want to calculate it out, I do need to clarify something else. Are you asking the probability of an n=100 sample average equal to 0.95 or less than 0.95? The probability…
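
    For reference, the “less than” version of the question works out like this (a quick sketch using the numbers above; note that for a continuous measure the probability of the average equaling exactly 0.95 is zero):

      import math
      from scipy import stats

      mu, sigma, n = 1.0, 0.01, 100
      x_bar = 0.95

      se = sigma / math.sqrt(n)   # standard error of the mean = 0.001
      z = (x_bar - mu) / se       # -50: fifty standard errors below the mean

      # P(sample average <= 0.95) underflows double precision; effectively zero
      print(stats.norm.cdf(z))    # 0.0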

  • I have to disagree with the idea of using a quadratic model by default. Adding terms to a regression model will always increase the R-square, and therefore make the model fit the data better. But a higher R-square doesn’t mean the predictive power of the model is any better. In fact, any terms that are just fitting the noise of the data set can…
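
    A quick way to see the danger (my illustration, with simulated data) is to watch in-sample R-square rise with polynomial order while out-of-sample error gets worse:

      import numpy as np

      rng = np.random.default_rng(7)

      # The true relationship is linear; higher-order terms only fit noise
      x_train = np.linspace(0, 10, 15)
      x_test = np.linspace(0.5, 9.5, 15)
      y_train = 2.0 * x_train + rng.normal(0, 3, x_train.size)
      y_test = 2.0 * x_test + rng.normal(0, 3, x_test.size)

      for degree in (1, 2, 6):
          coefs = np.polyfit(x_train, y_train, degree)
          resid = y_train - np.polyval(coefs, x_train)
          r2 = 1 - np.sum(resid**2) / np.sum((y_train - y_train.mean())**2)
          test_mse = np.mean((y_test - np.polyval(coefs, x_test))**2)
          print(degree, round(r2, 3), round(test_mse, 2))
      # R-square never decreases as terms are added, but the test error
      # typically worsens once the model starts chasing noise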

  • Minitab also has an Expanded Gage R&R that allows one or more additional factors. In your case it sounds like the suggestion from @schtipp is workable, but if you have a case where you want to include both operators and gages, you can use Gage R&R Study (Expanded) and put the gage ID in as an additional factor. 2 years, 4 months ago

  • I agree with grazman — if you can track the actual times each shipment left your dock or arrived at your customer’s dock (depending on who is responsible for freight), you can get a lot more information with less data than %OTD. You would set each shipment’s target as zero, and record minutes before (-) or after (+) the target.

    If you don’t have…
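
    The calculation itself is trivial once you have the timestamps; a sketch with hypothetical column names in pandas:

      import pandas as pd

      # Hypothetical shipment log
      df = pd.DataFrame({
          "promised": pd.to_datetime(["2024-03-01 10:00", "2024-03-01 14:00",
                                      "2024-03-02 09:00"]),
          "actual":   pd.to_datetime(["2024-03-01 09:45", "2024-03-01 14:30",
                                      "2024-03-02 09:05"]),
      })

      # Minutes early (-) or late (+) relative to a target of zero
      df["deviation_min"] = (df["actual"] - df["promised"]).dt.total_seconds() / 60

      print(df["deviation_min"].describe())  # continuous: mean, spread, etc.
      # %OTD would collapse each shipment to a single pass/fail bit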

  • @mayank_vj, how are you measuring productivity? If your operational definition of unproductive is based on some kind of quantitative measure (e.g., sales calls per week, hit rate, etc.), you will almost always be better off analyzing that quantitative data directly rather than converting it to binomial data. 2 years, 8 months ago
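
    A small simulation (my own, with made-up numbers) shows why continuous data wins: the same underlying improvement is detected far more often by a t test on the raw figures than by a proportion test on a dichotomized version of them:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      reps, n, cutoff = 2000, 30, 50   # cutoff defines "unproductive"

      hits_t = hits_prop = 0
      for _ in range(reps):
          before = rng.normal(52, 8, n)   # e.g., sales calls per week
          after = rng.normal(56, 8, n)    # modest real improvement

          # t test on the raw, quantitative data
          if stats.ttest_ind(before, after).pvalue < 0.05:
              hits_t += 1

          # the same data collapsed to unproductive yes/no before testing
          low = [np.sum(before < cutoff), np.sum(after < cutoff)]
          table = np.array([low, [n - low[0], n - low[1]]])
          chi2, p, dof, expected = stats.chi2_contingency(table)
          if p < 0.05:
              hits_prop += 1

      # Detection rates: the t test finds the shift far more often
      print(hits_t / reps, hits_prop / reps)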

  • Good discussion, but I have to disagree with @Mike-Carnell on a couple of points. The first is that % Tolerance is a better measure than % Study Variation. They tell you two completely different things, and which one is more relevant depends on the reason for doing the gage study.

    % Tolerance tells you how well you can distinguish good parts from…
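
    For readers who want the arithmetic behind the two metrics (standard gage study formulas, not quoted from the post), % Study Variation compares the measurement system to the total observed variation, while % Tolerance compares it to the spec window:

      import math

      # Hypothetical standard deviations from a gage study
      sd_gage = 0.8            # repeatability + reproducibility
      sd_part = 3.0            # part-to-part
      usl, lsl = 110.0, 90.0   # spec limits

      sd_total = math.sqrt(sd_gage**2 + sd_part**2)

      pct_study_var = 100 * sd_gage / sd_total           # vs. process variation
      pct_tolerance = 100 * (6 * sd_gage) / (usl - lsl)  # vs. the spec window

      print(round(pct_study_var, 1))  # ~25.8: ability to see process variation
      print(round(pct_tolerance, 1))  # 24.0: ability to sort good from bad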

  • I agree with @cseider — I have never encountered a practical application where a Z test was appropriate. If you are dealing with real world data, I would use the 2 sample t test. 2 years, 9 months ago
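
    For completeness, the 2 sample t test is a one-liner in most packages; a sketch in Python with made-up data, using Welch’s version so equal variances aren’t assumed:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      line_a = rng.normal(100.2, 1.5, 20)  # hypothetical measurements, line A
      line_b = rng.normal(101.0, 2.0, 20)  # hypothetical measurements, line B

      # Welch's 2-sample t test: no pooled-variance assumption, and no
      # known population sigma required (which is what a Z test needs)
      result = stats.ttest_ind(line_a, line_b, equal_var=False)
      print(result.statistic, result.pvalue)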
