iSixSigma

Chuck White

Activity

  • I would definitely take that warning as a red flag. It basically means that if your data had included any measurements at the spec limits, the current transformation function could not have been used. Since I would expect measurements at spec limits to be possible, I wouldn’t put any trust in this transformation.

    One possible cause for the error…
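
    The thread doesn’t say which transformation Minitab selected, so the following is a hedged illustration only: bounded fits such as a Johnson SB transformation are undefined at the endpoints of their fitted interval, which is why data at a spec limit would break the function. All parameter values below are invented.

    ```python
    # Illustration (assumed Johnson SB form, invented parameters): the transform
    # takes a log of (x - eps) / (eps + lam - x), which divides by zero at the
    # upper endpoint, so values exactly at a bound cannot be transformed.
    import math

    gamma, delta, epsilon, lam = 0.5, 1.2, 0.0, 10.0  # illustrative only

    def johnson_sb(x):
        """z = gamma + delta * ln((x - eps) / (eps + lam - x))."""
        return gamma + delta * math.log((x - epsilon) / (epsilon + lam - x))

    print(johnson_sb(9.99))   # fine: well inside the fitted interval
    print(johnson_sb(10.0))   # ZeroDivisionError: exactly at the upper bound
    ```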

  • Sorry to jump in late on the middle portion of this thread. In my thinking, the zero defects controversy boils down to the distinction between a target and a goal. Zero defects should always be the target (whether explicit or not), but it isn’t a reasonable goal. To use a design characteristic on a part as an analogy, the target is the…

  • @JohnWick the method Mike referenced is often called Signal Averaging. It relies on a basic result of sampling theory, closely related to the Central Limit Theorem: the standard deviation of sample averages is smaller than the standard deviation of individual measurements by a factor of the square root of the sample size. Therefore the distribution of averages of 4 measurements will…
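
    A minimal simulation of that square-root relationship (invented data, standard result):

    ```python
    # The standard deviation of averages of n=4 measurements is the individual
    # standard deviation divided by sqrt(4) = 2.
    import numpy as np

    rng = np.random.default_rng(42)
    individuals = rng.normal(loc=100.0, scale=2.0, size=100_000)

    # Average non-overlapping groups of 4 measurements
    means_of_4 = individuals.reshape(-1, 4).mean(axis=1)

    print(individuals.std(ddof=1))  # ~2.0
    print(means_of_4.std(ddof=1))   # ~1.0 = 2.0 / sqrt(4)
    ```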

  • Once the data is in Minitab, splitting it out into columns will be very difficult — Minitab does not have a text-to-columns feature like Excel’s. However, if you are importing the data from a text file, you can specify the field delimiter when you import. The default delimiter is a tab, but in your example it looks like selecting a Space delimiter…

  • I agree with Chris — you want to capture as many of the possible variation sources as you can. For a snapshot capability study as you described, you should also use Pp/Ppk instead of Cp/Cpk. Cp and Cpk are only valid after you have established stability over time, as evidenced by control charts. 3 years, 1 month ago

  • @rbutler — Sorry, I misinterpreted your suggestion as a novel approach to transform a factor with 4 levels into multiple factors with 2 levels, since Leo said he wants a screening design. You may not be familiar with Minitab software, but there is no need for the user to specify dummy (indicator) variables as you described. Minitab handles that in…

  • I recommend choosing a common data point along the length. If you use an average, your study will underestimate the part-to-part variation, since the variance of sample averages is always less than the variance of individual measurements. 3 years, 2 months ago

  • Dummy variables work fine for regression analysis, but are trickier for factorial DOEs. The proposed scheme wouldn’t work here since not all combinations are possible (for example, what needle position would correspond to D1=1, D2=1, D3=1?).

    I can think of 3 options for handling this situation:

    1. You can use a two dummy variable scheme where:…
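
    The options in the post are truncated, but purely as an illustration of the general idea behind a two-dummy-variable scheme (this particular mapping is an assumption, not necessarily the one the post goes on to describe): two 2-level factors give exactly four combinations, so every run in a full factorial corresponds to a real level.

    ```python
    # Hypothetical mapping of a 4-level factor (e.g. needle position) onto two
    # coded 2-level factors; every (D1, D2) combination is realizable.
    level_to_code = {
        "position_1": (-1, -1),
        "position_2": (-1, +1),
        "position_3": (+1, -1),
        "position_4": (+1, +1),
    }

    for level, (d1, d2) in level_to_code.items():
        print(f"{level}: D1={d1:+d}, D2={d2:+d}")
    ```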

  • Thanks for the article. I agree that the assumptions for nonparametric tests are often overlooked. I have a quiz I give my Black Belt students after covering nonparametric tests. Each question describes a […]

  • Strayer is correct — Z-score is just a unit conversion, often used in interim steps of capability calculations, but not a measure of capability in itself. However, I think you might be confusing the terms Z-score and Z-bench. Z-bench is the overall sigma level, calculated from the total defect rate beyond both spec limits.

    Getting to the root of your question, the main difference between the 69%…
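
    A sketch of the Z-bench idea (the defect rate below is an invented example, not a figure from the thread): fold the total fraction defective from both tails into one number, then convert it back to a one-sided Z value.

    ```python
    # Z-bench: the one-sided Z value corresponding to the combined fraction
    # defective beyond both spec limits.
    from scipy.stats import norm

    total_defect_rate = 0.0013  # assumed example: 1,300 defects per million

    z_bench = norm.ppf(1 - total_defect_rate)
    print(round(z_bench, 2))  # ~3.01
    ```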

  • Unfortunately there isn’t anything else you can do to improve the precision of your estimates, but you can at least quantify that precision with confidence intervals. Here are a couple of articles on calculating confidence intervals for…

  • Standard error is simply the sample standard deviation divided by the square root of the sample size. To reduce the standard error, all you have to do is increase the sample size. 3 years, 3 months ago
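
    A quick sketch of both points with invented measurements: the standard error calculation, plus a t-based confidence interval that quantifies the precision mentioned in the previous post. Halving the interval width takes roughly four times the sample size, since the standard error shrinks with the square root of n.

    ```python
    import math
    import statistics
    from scipy import stats

    data = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 9.7, 10.0]  # made-up data

    s = statistics.stdev(data)      # sample standard deviation
    se = s / math.sqrt(len(data))   # standard error of the mean
    mean = statistics.mean(data)

    # 95% confidence interval for the mean, built from the standard error
    lo, hi = stats.t.interval(0.95, len(data) - 1, loc=mean, scale=se)
    print(f"SE = {se:.3f}, 95% CI = ({lo:.2f}, {hi:.2f})")
    ```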

  • @regnells for your application, I believe the Acceptance Sampling tools will be a more direct way to get the sample sizes you need. They can be found in the Quality Tools submenu of the Stat menu.

    For a validation test plan, you just assume an infinite lot size, so leave that blank in the dialog box. Set the AQL to an arbitrarily low number (.1…
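
    The dialog steps above are Minitab-specific, but the math behind a zero-failure (c = 0) validation plan is simple enough to sketch (the RQL and risk values below are assumed examples): accept only if no defects appear in n samples, and size n so that a lot at the rejectable quality level would rarely pass.

    ```python
    # With c = 0, a lot with true defect rate p is accepted with probability
    # (1 - p)^n, so we need the smallest n with (1 - RQL)^n <= beta.
    import math

    rql = 0.05   # defect rate that should be rejected (assumed example)
    beta = 0.05  # consumer's risk, i.e. 95% confidence of rejection

    n = math.ceil(math.log(beta) / math.log(1 - rql))
    print(n)  # 59 -> the classic 95/95 zero-failure plan
    ```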

  • @rbutler, you may be right. The reason I thought a paired t test might apply is because @Marson stated that he calculated the percent change in a 3rd column. Since he referred to a column and not a cell, it seems plausible that he has a separate calculation for each row of data, which would suggest pairs.

    Either way, some clarification would…

  • Again, if you just want to test the percent change in the overall average between two groups, your original t test did that. The only difference between the measured change and the percent change is the units.

    If you want to test the average percent change between individual pairs, you can use a Paired t test (or a 1 sample t on the calculated…
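
    A sketch with invented before/after data showing the two routes: a paired t test on the raw values (identical to a 1-sample t on the absolute differences) versus a 1-sample t on the calculated percent changes.

    ```python
    import numpy as np
    from scipy import stats

    before = np.array([100.0, 105.0, 98.0, 110.0, 102.0])  # made-up data
    after = np.array([103.0, 104.0, 101.0, 115.0, 106.0])

    pct_change = (after - before) / before * 100

    # Is the mean percent change different from 0?
    t1, p1 = stats.ttest_1samp(pct_change, 0.0)

    # The paired t on raw values tests the mean absolute change instead
    t2, p2 = stats.ttest_rel(after, before)

    print(f"1-sample t on % change: t={t1:.2f}, p={p1:.3f}")
    print(f"paired t on raw values: t={t2:.2f}, p={p2:.3f}")
    ```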

  • I’m not sure if you are confusing statistical significance with practical significance, or if you are looking for a paired t test.

    Statistical significance means that the difference you observed between your sample groups is too big to be reasonably explained by sampling error (chance). Based on your t test, the difference is statistically significant…

  • Yes, you are right. Minitab stores the model information for responses in the data worksheet (in the background, not in the table cells), so both responses have to be in the same worksheet, as part of the same DOE, to analyze them together with the Response Optimizer.

    Assuming the run order is the same in both of your worksheets, you should be…

  • David, your early suspicion was correct: the Minitab Assistant tied your hands. For multiple responses, you will want to use the more powerful functions available in the Stat > DOE menu. You won’t need to start from scratch, but you may need to define your DOE before you can analyze it (Stat > DOE > Factorial > Define Custom Factorial Design).

    As…

  • I would add one more complication to your study. In your description you said that all of the participants took the standardized test first, and then took your own custom test two months later. That means any change over time would be confounded with the difference between the tests. In other words, if the participants either learned more or…

  • Chuck White changed their profile picture 5 years, 5 months ago
