Thanks all for your input.
I did open the macro for Minitab ANOM and quickly found that the H statistic calculation looks pretty cumbersome…
I will try to get to the H table values you described.
One thing I found interesting is that for randomly generated normal data with 10-point subgroups, ANOM seemed to have slightly tighter decision limits…[Read more]
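In case it helps, here is a rough Python sketch (not the Minitab macro) of how balanced one-way ANOM decision limits are usually written; the h critical value still has to come from a published ANOM table (e.g., Nelson's), so the function just takes it as an input:

```python
import numpy as np

def anom_decision_limits(groups, h):
    """Rough sketch of one-way ANOM decision limits for balanced subgroups.

    groups : list of equal-length arrays (k subgroups of size n)
    h      : critical value h(alpha; k, df) from a published ANOM table
             (not computed here).
    """
    k = len(groups)
    n = len(groups[0])
    grand_mean = np.mean([np.mean(g) for g in groups])
    # Pooled within-subgroup standard deviation, df = k*(n-1)
    s_pooled = np.sqrt(np.mean([np.var(g, ddof=1) for g in groups]))
    half_width = h * s_pooled * np.sqrt((k - 1) / (k * n))
    return grand_mean - half_width, grand_mean + half_width
```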
I have used one-way ANOVA in Minitab with one of the optional comparison methods for the groups. This provides all the combinations of group comparisons, comparisons with the “control” group, or with the “best of the rest” (maybe what you are after).
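For anyone working outside Minitab, here is a rough Python sketch of the same idea with made-up data: a one-way ANOVA followed by Tukey all-pairwise comparisons. The Dunnett (vs. control) and Hsu's MCB (“best of the rest”) options Minitab offers are different methods not shown here:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: three groups of measurements
a = np.array([10.1, 9.8, 10.3, 10.0])
b = np.array([10.9, 11.2, 10.7, 11.0])
c = np.array([10.2, 10.4, 9.9, 10.1])

# One-way ANOVA
f_stat, p_val = stats.f_oneway(a, b, c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# All pairwise comparisons (Tukey)
values = np.concatenate([a, b, c])
labels = ["A"] * len(a) + ["B"] * len(b) + ["C"] * len(c)
print(pairwise_tukeyhsd(values, labels))
```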
I agree that normality tests are a proper first step before jumping right into parametric (normal) hypothesis tests. If data is normal, great. If not, proceed with caution and/or use non-parametric tests (less powerful).
If data is not normal, this is often critical. It can highlight special causes which ARE the problem (skewness, outliers, tw…[Read more]
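A minimal sketch of that decision flow in Python, with hypothetical groups x and y; the cutoff and the particular tests are just one reasonable set of defaults:

```python
from scipy import stats

def compare_two_groups(x, y, alpha=0.05):
    """Check each group for normality, then pick a two-sample test accordingly."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in (x, y))
    if normal:
        # Parametric: Welch's t-test (does not assume equal variances)
        return "t-test", stats.ttest_ind(x, y, equal_var=False)
    # Non-parametric (less powerful when the data really are normal)
    return "Mann-Whitney", stats.mannwhitneyu(x, y, alternative="two-sided")
```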
Yeah, I did not know much about reliability analysis going into my MBB role. If you have Minitab and 100 sets of data, try distribution fitting in Minitab first: Stat > Reliability/Survival. You can also do some one- or two-factor regressions to see what affects MTBF.
Don't assume a Weibull pdf, although it often gives the best fit of your d…[Read more]
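If you want a quick sanity check alongside Minitab, here is a hedged Python sketch that fits a few common life distributions and ranks them by log-likelihood. It ignores censored data (which Minitab's reliability tools handle properly), and a fairer comparison would penalize extra parameters (AIC):

```python
import numpy as np
from scipy import stats

def best_fit(times):
    """Fit a few common life distributions and rank them by log-likelihood."""
    candidates = {
        "weibull": stats.weibull_min,
        "lognormal": stats.lognorm,
        "exponential": stats.expon,
    }
    results = {}
    for name, dist in candidates.items():
        params = dist.fit(times, floc=0)           # fix location at 0 for life data
        results[name] = np.sum(dist.logpdf(times, *params))
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)
```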
I don't think you are cheating – just using some engineering judgement!
If you think the full top cycling does affect the sensor's accuracy/stability, then you may want to do the full cycling to at least understand this. Maybe run chart 20-30 full-cycle measurements.
If you are just interested in sensor repeatability, then take the short cut…[Read more]
Your gage R&R results can be used to help with capability studies and DOE analysis.
For capability studies, you can determine capability indices based on total observed variation. Knowing the measurement SD, you should be able to estimate the process SD. You could then do some “what if” analysis. What would the process capability be at im…[Read more]
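A small illustrative sketch of that “what if” logic, with made-up numbers: variances add, so the process SD falls out of the total and measurement SDs, and you can then recompute capability under a hypothetical better gage:

```python
import numpy as np

def process_sd(total_sd, measurement_sd):
    """Back out the process SD from the total observed SD and the Gage R&R SD.

    Variances add: SD_total^2 = SD_process^2 + SD_measurement^2.
    """
    return np.sqrt(total_sd**2 - measurement_sd**2)

# "What if" example with made-up numbers: how much would Cpk improve
# if measurement variation were cut in half?
usl, lsl, mean = 550.0, 500.0, 525.0
sd_total, sd_meas = 3.0, 1.5
sd_proc = process_sd(sd_total, sd_meas)
for new_sd_meas in (sd_meas, sd_meas / 2):
    sd_obs = np.sqrt(sd_proc**2 + new_sd_meas**2)
    cpk = min(usl - mean, mean - lsl) / (3 * sd_obs)
    print(f"SDmeas = {new_sd_meas:.2f} -> observed Cpk ~ {cpk:.2f}")
```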
The Minitab website has a macro catalog with one that draws normal curves for you. You have to go to the website, download the macro to your macro folder, and run it using the command line editor. Kind of a pain.
I agree with Gabriel on all of his points. The centering is right on and the variation (SD) is amazingly low…
If you do calculate Cp or Cpks, be careful in quoting them directly when only 8 measurements were used. In your case, the Cpk might be estimated at ~499 with a lower 95% confidence bound at only 280. If you come to find the real SD is 0.100 not 0.001, th…[Read more]
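For what it's worth, one common approximation for that lower bound (a Bissell-type formula) reproduces numbers in that ballpark; treat this as a sketch rather than Minitab's exact calculation:

```python
import numpy as np
from scipy import stats

def cpk_lower_bound(cpk_hat, n, conf=0.95):
    """Approximate one-sided lower confidence bound for Cpk (Bissell-type formula).

    With very few measurements the bound sits far below the point estimate.
    """
    z = stats.norm.ppf(conf)
    return cpk_hat * (1 - z * np.sqrt(1 / (9 * n * cpk_hat**2) + 1 / (2 * (n - 1))))

print(cpk_lower_bound(499, 8))   # roughly 280 with only 8 measurements
```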
I agree – too many personal attacks and not enough discussion about shift!
My customer asks for Cpk and Ppk information on a certain property of the lots of material we send them through the year. The Ppk tends to be lower than Cpk by 0.2-0.6 depending on the quarter I send him the data. Sounds like a 0.6-1.8 Z shift to me.
It's pretty ob…[Read more]
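Assuming the usual convention that the Z (sigma) level is roughly 3 x the capability index, the arithmetic behind that shift estimate is just:

```python
# Z (sigma level) is roughly 3 x Cpk, so a Cpk-to-Ppk gap translates directly:
for gap in (0.2, 0.6):
    print(f"Cpk - Ppk = {gap:.1f}  ->  Z shift ~ {3 * gap:.1f}")
```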
Your question is timely as I will post a similar question today.
In your time series (with seasonality and dependence or autocorrelation?), you may wish to try the following things:
Compare the current month to 12 months ago and do this for the last 12 months. This is a 12-point data set of differences year over year with seasonality…[Read more]
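A quick sketch of that differencing in Python/pandas, using a made-up 24-month series:

```python
import pandas as pd

# Hypothetical monthly series with seasonality (at least 24 months of history)
y = pd.Series(
    [110, 95, 102, 120, 130, 140, 150, 145, 125, 115, 105, 100,
     118, 99, 108, 127, 141, 149, 158, 151, 133, 120, 112, 104],
    index=pd.period_range("2003-01", periods=24, freq="M"),
)

# Difference each month against the same month a year earlier; the last 12
# values are the 12-point "year over year" data set.
yoy = y.diff(12).dropna()
print(yoy)
```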
I agree with earlier posts – first, do a Gage R&R to see how much measurement variation is impacting your current Cpk, and use this stdev for planning your DOE (# of replicates needed to “see” effects).
If you are planning on having the ability to see significant effects on stdev from the DOE factors, you will generally need more…[Read more]
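A rough planning sketch, assuming a simple two-level comparison and the usual normal-approximation sample-size formula; the effect size and SD are placeholders, and detecting effects on the stdev itself generally takes many more runs than this:

```python
import math
from scipy import stats

def replicates_per_setting(effect, sd, alpha=0.05, power=0.90):
    """Rough normal-approximation sample size per group to 'see' a mean effect
    of size `effect` against noise `sd` (e.g., the Gage R&R SD) when comparing
    two factor levels."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    n = 2 * ((z_a + z_b) * sd / effect) ** 2
    return math.ceil(n)

print(replicates_per_setting(effect=2.0, sd=1.5))   # illustrative numbers only
```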
Yeah, I know that the first example (mean = 545.0, SDprocess = 0.00) probably does not really happen.
I think that the percentages should be close for both the first and second cases (SDproc = 0.0 up to 2.0).
If you or anyone has direct formulas for calculating alpha/beta risk as function of process mean, SDproc and SDmeas, these would…[Read more]
I think it depends on what you mean by “EV” and “doubtful”. I will assume you define EV as 6 x SDmeasurement and doubtful as either producer or consumer error (alpha and beta errors) being, say, >0.1%. If these are OK to assume, then the following example may help:
If USL is 550, LSL is 500, process mean is 545, SDprocess is 0.0, S…[Read more]
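A sketch of the producer/consumer risk calculation for that setup, assuming normally distributed measurement error; the measurement SD of 1.5 below is purely illustrative since the original value is cut off. With SDprocess = 0 every part sits at the mean, so the single-part risk is also the lot risk; with SDprocess > 0 you would integrate over the process distribution:

```python
from scipy import stats

def misclassification_risks(lsl, usl, true_value, sd_meas):
    """Probability that a single measurement of a part whose true value is
    `true_value` lands on the 'wrong' side of the spec limits, given Gaussian
    measurement error with SD `sd_meas`."""
    p_measured_in = (stats.norm.cdf(usl, true_value, sd_meas)
                     - stats.norm.cdf(lsl, true_value, sd_meas))
    if lsl <= true_value <= usl:
        return {"producer risk (good part rejected)": 1 - p_measured_in}
    return {"consumer risk (bad part accepted)": p_measured_in}

# Example from the post: USL 550, LSL 500, true value 545 (SDprocess = 0),
# with an assumed measurement SD of 1.5 (illustrative only).
print(misclassification_risks(500, 550, 545, 1.5))
```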
1. I would add 352 (to get all values > 0) to each value, then do an appropriate transformation to get it Normal.
2. Run a capability analysis anyhow. Read the observed % or DPMO values outside the spec limits. Get the Z value(s) from a Z table. Example: 2.0% out of spec high, 1.0% out of spec low. The Z table gives Z = 1.88 at 3% (see the sketch after this list).
3. Try a probability…[Read more]
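A sketch of the step-2 lookup, done with the normal quantile function instead of a paper Z table:

```python
from scipy import stats

# Observed fractions outside spec (from the example: 2.0% high, 1.0% low)
p_high, p_low = 0.020, 0.010
p_total = p_high + p_low

# Benchmark Z: the normal quantile that leaves the total observed fraction
# in one tail.  3% out of spec gives Z ~ 1.88, matching the table lookup.
z_bench = stats.norm.ppf(1 - p_total)
print(round(z_bench, 2))   # 1.88
```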
Robert's thoughts on graphical analysis of the data are good and should give you quick info on what the major thickness variation sources are.
I have the same situation with a film production line. Some things I have done:
1. Try to characterize thickness variation overall and in MD and CD components (rough sketch below). We took all “filtering” out for this p…[Read more]
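A very rough sketch of that MD/CD split for a grid of thickness readings (rows along the machine direction, columns across the web); this is a crude decomposition, not a proper variance-components ANOVA:

```python
import numpy as np

def md_cd_components(thickness):
    """Rough split of thickness variance into machine-direction (rows),
    cross-direction (columns) and residual pieces for a 2-D grid of readings."""
    grand = thickness.mean()
    md_means = thickness.mean(axis=1)     # profile along the machine direction
    cd_means = thickness.mean(axis=0)     # profile across the web
    residual = thickness - md_means[:, None] - cd_means[None, :] + grand
    return {
        "MD": np.var(md_means, ddof=1),
        "CD": np.var(cd_means, ddof=1),
        "residual": np.var(residual, ddof=1),
    }
```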
1. Try a fractional factorial (4-run) design avoiding the “bad” settings in the four runs. If you can only afford 8 total runs, make 2 replicates. This can give you info on main effects and potentially some on an interaction (with planning) – see the sketch after this list.
2. You may want to consider using a simplex optimization method of testing. Although this metho…[Read more]
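A minimal sketch of option 1 with generic factors A, B, C (placeholders); the half-fraction is built with the generator C = A*B, and choosing which fraction to run is how you dodge the “bad” corner of the design space:

```python
import numpy as np

# Half-fraction of a 2^3 design (4 runs): set factor C = A*B, so main effects
# are estimable but C is aliased with the A*B interaction.
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
C = A * B
design = np.column_stack([A, B, C])
print(design)

# With 8 affordable runs, repeat the 4 settings once (2 replicates) to get an
# estimate of pure error as well as the main effects.
replicated = np.vstack([design, design])
```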
I have some experience with thin materials (0.0003-0.008 inch) metrology.
Contact methods can be tricky as stated. Pressure, contact tip area, deformation, surface contaminants and alignment have been problems. Difficult to find “master” parts at these thicknesses.
If average thickness is critical, consider weighing a known area of the label a…[Read more]
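The arithmetic behind the weighing approach, with purely illustrative numbers (you need a reasonable handle on the material density and the cut area):

```python
def average_thickness(mass_g, area_cm2, density_g_per_cm3):
    """Average thickness (in inches) of a film/label sample of known area,
    backed out from its weight and material density."""
    thickness_cm = mass_g / (density_g_per_cm3 * area_cm2)
    return thickness_cm / 2.54

# Illustrative numbers only: 10 cm x 10 cm sample of a ~1.4 g/cm^3 material
print(average_thickness(mass_g=0.95, area_cm2=100.0, density_g_per_cm3=1.4))
```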
As Mike stated, there are options for comparing means/medians of two groups which are not each Normally distributed.
Another quick manual test is the TUKEY end count test. Dot plot the two groups. Count the points from the “low” group that are lower than the lowest value from the “high” group. Count the points from the “high” group that are…[Read more]
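A tiny sketch of that end count; the significance cutoffs in the comment are the commonly quoted ones for Tukey's quick test, so double-check a reference before leaning on them:

```python
def tukey_end_count(low_group, high_group):
    """Tukey's quick two-sample end-count test.

    Count values in the 'low' group below the minimum of the 'high' group, plus
    values in the 'high' group above the maximum of the 'low' group.  Common
    rule of thumb for the total end count: >=7 significant at about the 5%
    level, >=10 at about 1%, >=13 at about 0.1%.
    """
    low_count = sum(1 for x in low_group if x < min(high_group))
    high_count = sum(1 for x in high_group if x > max(low_group))
    return low_count + high_count
```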