Forum Replies Created


July 13, 2008 at 1:50 pm #173791
Stan, if you cannot reply to posts in a professional manner, perhaps you should not reply at all. I visit this forum every couple of months and am sick and tired of reading your rude and abusive nonsense. Why don’t you get a life and leave this forum to people who have a genuine interest in Six Sigma?
May 21, 2008 at 2:40 pm #172133
Could you please send me a copy of the funny lean vid.
Thank you very much,
[email protected]

April 10, 2008 at 1:55 pm #170944
It is possible that the time is related to the size (value) of the debt.
So you might want to plot these variables against each other initially, then stratify the data.
There may be different root causes associated with different subsets.
There may be one subset that gives the biggest hit.
Nothing profound here but hopefully it helps.

March 14, 2008 at 12:59 pm #169675
I’ve just realised the three examples I gave could all be the same person’s life.
February 27, 2008 at 11:44 am #169033

February 27, 2008 at 9:54 am #169031
Is it technically possible to tell, once a defect is found at stage 8, at what stage it originated? Or can any stage create the same defect?
Could some brainstorming and a simple ‘fishbone diagram’ identify the possible causes? They are what you then try to control.

February 25, 2008 at 5:01 pm #168969
If the consequences are so serious, I do not think your objective should be to minimise the audit time; it should be to minimise the frequency and/or cost of the errors.
How often do the auditors find an error just now? What types of error? What are the root causes of these errors?
Also, is the existing audit process 100% effective, or are there still some errors after audit?
This may lead you to a better defined objective.

February 25, 2008 at 2:56 pm #168960
What is the source of the formula:
Min. Sample Size with 95% confidence level and precision 5% = (1.96/0.05)^2 × p(1 − p)

February 11, 2008 at 4:19 pm #168433
I was taught that “inspections are not value add”. However, on reflection I believe that inspections to check work has been done right must be different from inspections to accurately assess what work needs doing (which is my assumption about aviation inspections).
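The sample-size formula quoted a couple of posts above is the standard one for estimating a proportion p to a given precision; a quick sketch in Python (the function name is mine, and p = 0.5 is the usual worst-case assumption):

```python
import math

def min_sample_size(z=1.96, precision=0.05, p=0.5):
    """Minimum sample size to estimate a proportion p within the given
    precision at the confidence level implied by z (1.96 -> 95%)."""
    return math.ceil((z / precision) ** 2 * p * (1 - p))

# Worst case (p = 0.5), 95% confidence, 5% precision:
print(min_sample_size())  # 385
```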
February 6, 2008 at 2:41 pm #168257
Disregard last post – wrong topic!
February 6, 2008 at 2:36 pm #168256
I’m no expert, but have you considered non-parametric tests – e.g. Mann-Whitney? This basically compares the cumulative distributions with each other – regardless of their fit to any standard distribution.
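A minimal sketch of a Mann-Whitney test in Python (the two samples here are made up for illustration):

```python
from scipy.stats import mannwhitneyu

# Hypothetical measurements from two processes with skewed distributions
sample_a = [12, 15, 11, 19, 14, 30, 13, 16]
sample_b = [10, 9, 14, 8, 11, 12, 25, 9]

# Two-sided test: are the two distributions shifted relative to each other?
stat, p = mannwhitneyu(sample_a, sample_b, alternative='two-sided')
print(f"U = {stat}, p = {p:.3f}")
```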
February 5, 2008 at 9:42 am #168177
Your proposal implies an increase in some types of defect could be hidden by a decrease in the most heavily weighted defects.
Avoid composite metrics like this. You will never get agreement on the weightings.
You have several types of defect – measure, investigate and reduce them separately.

February 5, 2008 at 9:03 am #168176
What do you do with the information you get from these scales?
Would it matter if it was wrong – if not, why are you weighing anyway?
If it is important in some way, maybe better check the scales now and again.
February 4, 2008 at 1:12 pm #168141
Good advice. I suspect my problem will be either to
calm down the optimists who want to declare victory after one month of 40 v 30
or
ask for patience from the doom mongers if the first month turns out 36 v 34.

February 4, 2008 at 9:21 am #168126
How many enquiries are there that did not result in a contract? Is that a big enough sample to do anything with?
February 1, 2008 at 1:32 pm #168074
Jan,
Thanks again – I think I now have this clear in my head.
Glen

February 1, 2008 at 1:01 pm #168072
Thanks Jan
For 40 machines x 8 months I have
Power and Sample Size
2-Sample t Test
Testing mean 1 = mean 2 (versus <)
Calculating power for mean 1 = mean 2 + difference
Alpha = 0.05 Sigma = 1.3
Sample
Difference Size Power
0.25 320 0.7838
0.17 320 0.5030
So, quoting Minitab
The power of a test is the probability of correctly rejecting H0 when it is false. In other words, power is the likelihood that you will identify a significant difference (effect) when one exists. © All Rights Reserved. 2000 Minitab, Inc.
The null hypothesis is that there is no change.
If the modification does in fact achieve 0.25, we have a 78% probability of correctly concluding this, but if the result of the modification is say only 0.17, we have only a 50% probability of drawing the correct conclusion?
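The Minitab figures above can be reproduced with a short noncentral-t calculation; a sketch, assuming equal group sizes of 320 and a one-sided test:

```python
from scipy import stats

def two_sample_t_power(diff, sigma, n_per_group, alpha=0.05):
    """One-sided power of a two-sample t test with equal group sizes."""
    df = 2 * n_per_group - 2
    # Noncentrality parameter for the given true difference
    ncp = (diff / sigma) * (n_per_group / 2) ** 0.5
    t_crit = stats.t.ppf(1 - alpha, df)
    return stats.nct.sf(t_crit, df, ncp)

print(round(two_sample_t_power(0.25, 1.3, 320), 4))  # close to Minitab's 0.7838
print(round(two_sample_t_power(0.17, 1.3, 320), 4))  # close to Minitab's 0.5030
```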
January 28, 2008 at 10:36 am #167835
Surely ‘approaches zero’ and ‘nearly equivalent’ are only meaningful when compared to some standard of precision.
January 27, 2008 at 10:53 am #167796
If you do a search on this forum you will find that it has been discussed extensively over the years and there are any number of views. The upshot seems to be that it is a reasonable rule of thumb where the accuracy is sufficient for most purposes. It is also where you can start to use the z-test instead of the t-test.
From my experience I believe it is too small; I prefer to have at least 50. That may be a personal prejudice or it may be that quality standards and accuracy expectations have increased since Gosset’s day.

January 27, 2008 at 1:18 am #167789
Top management are concerned with strategic issues and in many cases regard process improvement as a tactical issue. They see SAP as a strategic investment.
Where Six Sigma has been successful it has been sold to top management as a strategic issue, most notably of course at Motorola and GE.
It does not necessarily follow that top management are shortsighted, in many companies cost reduction and quality improvements are not strategically important as long as they are broadly in line with the competition. They are strategically important in industries where competitors have converged and are jostling for position e.g. automotive, and those industries are right on it.
In many companies cost and quality are strategically important, but management haven’t realized yet; those are ripe for Six Sigma.
Middle managers in many companies are very tightly constrained. They may have apparently enormous power and enviable budgets, but very little discretion. If process improvement is not in their budget they can’t spend a hundred dollars on it. That’s why you need top management support.

January 25, 2008 at 12:13 pm #167709
When used properly, ISO9001 and Six Sigma are complementary. Six Sigma makes the breakthrough improvements; ISO9001 locks them in and ensures that there is continuous improvement. Six Sigma can tackle the big issues systematically, knowing that there is appropriate corrective and preventative action in the other areas, and that when a Six Sigma project is complete the improvements will be locked in.
If you have “limited engagement” maybe you should review your choice of certification company. Para 5.1 of ISO9001 states “Top management shall provide evidence of its commitment to the development and implementation of the quality management system and continually improving its effectiveness by …….”. If management is not so committed then you should not be certified.
Six Sigma can be brought into the ambit of ISO9001 through 8.2.1: “The organization shall monitor information relating to customer perception as to whether the organization has met customer requirements. The methods for obtaining and using this information shall be determined ….” Again, there are other paras that give you opportunities for requiring the use of Six Sigma, e.g. 8.5.3: “The organization shall determine action to eliminate the causes of potential nonconformities in order to prevent their occurrence. Preventative actions shall be appropriate to the effects of the potential problems …….”

January 2, 2008 at 5:36 am #166683
The Quality Council of Indiana produces the most comprehensive study material:
http://www.qualitycouncil.com/cm.asp

December 14, 2007 at 11:27 am #166143
The original statement was that it shouldn’t be used, and most postings agree with that on the grounds that SPC is better and makes acceptance sampling redundant. If I understand you right, Mood’s theorem is going one step further and saying that acceptance sampling doesn’t work if a process is in control. Acceptance sampling has been used since Moses was a lad; if it didn’t work, surely somebody would have noticed before now.
December 14, 2007 at 2:11 am #166127
I know I’m replying to an old post, but scanning the responses I couldn’t see any mention of the Buckingham Game. It uses various colors of Lego brick. I’ve used it and it went well; it’s fun and gets people involved.
December 14, 2007 at 1:29 am #166126
I would have thought Six Sigma was very applicable to heart surgery. Not so much in the skill of the surgeon but in things like making sure instruments aren’t left inside the patient, that everything is sterilized properly, that vital equipment is available and reliable, etc.
December 11, 2007 at 8:19 am #165978
The 3-Parameter Weibull is fine, but Minitab finds a better one with the Johnson Transform if you want to be fussy (P-Value 0.049 cf 0.025).
December 5, 2007 at 3:54 am #165730
The snag with using the number of days directly is that their distribution will follow a Poisson distribution rather than a normal distribution. It would be possible to calculate control limits but they would be ‘very asymmetric’ (Montgomery). This can be overcome by transforming the days between events so the distribution is near enough normal, and then using the I-MR chart.
Montgomery suggests a transform y^(1/3.6), a posting here suggested y^(1/2), and another suggestion is to convert to an annual rate (that is, 1/y scaled for convenience).
Another alternative is to use a g chart, which is a type of attribute chart. It plots the proportion of ‘events’ to ‘instances’, an instance can be a ‘day’.
That’s what I’ve gathered so far. I’ve got some real data sets and I can generate random variables in Excel. When I get the chance I’ll simulate all the solutions and see what happens.

December 5, 2007 at 2:39 am #165726
The transform y^(1/3.6) was from Introduction to Statistical Quality Control by Montgomery. The idea of converting to rates was from Implementing Six Sigma by Breyfogle.
Somebody sent me a paper on g and h charts, by Benneyan, from the Journal of Health Care Management Science. It looks as though the g chart would also be suitable. I haven’t come across t charts.

December 3, 2007 at 11:21 pm #165667
Thanks. I’ll simulate all three methods that I’ve found so far – the rate, the square root and the power 1/3.6 – and see what happens.
My query was also prompted by a student; he works in healthcare, where there are many potential uses for this type of chart.

November 12, 2007 at 11:50 am #164632
Quoting from the text by Forrest Breyfogle III: “AIAG (1995b) defines the following: For attribute charts capability is defined as the average proportion or rate of nonconforming product…..”, and further that “if desired this can be expressed as the proportion conforming to specification (i.e., 1 – pbar)”.
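Going back to the days-between-events discussion a few posts above: the three candidate transforms (the 1/3.6 power, the square root, and conversion to a rate) are easy to compare on simulated data. A rough sketch, assuming rare events with exponential inter-arrival days (the data are simulated, not real):

```python
import random

random.seed(1)
# Simulated days between rare events, roughly exponential with mean 20 days
days = [random.expovariate(1 / 20) for _ in range(200)]

y_power = [d ** (1 / 3.6) for d in days]   # Montgomery's 1/3.6 power
y_sqrt = [d ** 0.5 for d in days]          # square-root transform
y_rate = [365 / d for d in days]           # annualized rate (1/y scaled)

# Crude symmetry check: for a near-normal shape, mean and median agree
for name, y in [("1/3.6 power", y_power), ("sqrt", y_sqrt), ("rate", y_rate)]:
    median = sorted(y)[len(y) // 2]
    print(f"{name}: mean={sum(y) / len(y):.2f} median={median:.2f}")
```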
November 4, 2007 at 2:40 am #61772
The sample size you are using to calculate the average magnitude of overpayment is only 16, which is very small. The rule of thumb is a minimum sample size of 30, and preferably 50+.
It is also important to look carefully at the data. For example, if you found there was one overpayment of $200 and the remaining 15 totaled the balance of $19.95, then the extrapolation would not be valid.
In this type of situation you often find there are many minor errors and a few large ones but nothing much in between; there is a mix of two (or more) distributions. Your sample would need to capture at least 30 from each distribution to be valid.

March 3, 2007 at 8:53 am #152692
I’ll take one from Ranjit Roy’s book, molding a nylon grip.
Control factors (5, using an L8 array):
cure time
cooling rate
cooling air flow
additive
mold temperature
Noise factors (3, using an L4 array):
nylon condition
ambient temperature
humidity
The objective is to find the combination of control factors that will minimize shrinkage and is least affected by variations in the noise factors. So that’s 32 runs. I guess the question arises: is it better, or why is it better, than putting the 8 factors into an L32? Sound of penny dropping! The real reason for distinguishing between control and noise factors is that only the factors in the inner array are used in the S/N ratio calculations.

March 3, 2007 at 5:50 am #152690
Taguchi divided factors into ‘controllable factors’ and ‘noise factors’; both types are ‘factors’. In classical design there is no such distinction; there are only ‘factors’. Therefore if you use a classical design in place of a Taguchi design you would ignore the distinction between controllable and noise factors.
In classical design ‘noise’ refers to the ‘common cause’ variation in an experiment. For the analysis to be valid (in classical design) this noise must conform to a normal distribution. That means that it is the aggregate of the contributions from many small factors, none of which swamp the others. Because of this it is important that you randomise the runs and carry out residual analysis. That would show up any ‘lurking variables’, among other potential causes of non-normality.
If time or material constraints prevent you carrying out complete randomisation, you can use blocks; which is what I gather you did.
March 2, 2007 at 11:05 am #152649
You could put it into the inner array, but you would probably end up having to carry out more runs.
You select a Taguchi orthogonal array for the inner array that will allow you to analyse the effects of the inner array factors, and any selected two-way interactions, with the minimum number of runs. You are worried about confounding.
You select an outer array that is orthogonal, but you don’t need to worry about analyzing the effects of the noise factors. You just want to balance them out so you can select optimal levels for the inner array factors. You don’t care about confounding.
With classical design you rely on randomization, and the central limit theorem, to balance out the noise effects and convert them into normally distributed ‘noise’. With Taguchi you try to do it deliberately, identifying the specific noise factors and distributing them so they balance out.
That’s why I claim that Taguchi is more applicable to the design phase. If you are analyzing an existing process you can rarely be confident that you can identify all the two-way interactions and noise factors. Not to mention the practical problem highlighted earlier in this thread of controlling the noise factors with sufficient accuracy.
March 2, 2007 at 6:55 am #152641
Taguchi seems to work best in the design phase when you are looking to see how the design will cope with identified noise variables.
Suppose you were designing an electrical device and you expected the supply voltage to vary between 4.8 and 5.2 when the device eventually goes into service. You have no control over what happens in service, but you can control the voltage precisely in your laboratory. That’s a typical ‘noise factor’.
On the other hand, somebody using the device as part of their process may have no control over the input voltage variations. They could use the classical approach, being careful to randomise the runs so that the voltage, or other sources of variation, could not impose an undetectable systematic error on the results. The voltage error would then be part of the error variation (SSE) in the ANOVA analysis.

August 4, 2005 at 6:27 am #124234
Hi Ken, thanks for responding.
The main objective would be to identify trends in the average processing time – particularly to warn of increases. There are also a number of ‘sites’ and it would be useful to compare performance across the sites.
I’m thinking that EWMA charts may be the best bet, because they are not sensitive to the normal approximation (Montgomery) and they look promising when I apply them to the historical data.
Regards
Glen
January 15, 2005 at 5:21 pm #113531
Hello Nick – I am doing a Six Sigma project on voluntary termination in a manufacturing facility. One of the strongest correlations I have found is between length of service and accidents – new hires have ~80% of the accidents. Records are not available to correlate length of service to quality and productivity, but I am sure it exists. Would like to share more – how can we do that?
January 6, 2004 at 9:38 am #93973
Hi
Let me put up an actual case I came across recently, which is consistent with Gabriel’s third simulation listed below.
It concerns a production line that fills bottles of detergent. Every hour the supervisor weighed a subgroup of four bottles for control charting. The variation within each subgroup was very small.
However the bottle filling machine also had longer term fluctuations (over many hours) also conforming to a normal distribution. These arose from environmental changes, the pressure of the compressed air and so on. In any case the company didn’t think it worth trying to fix it.
The supervisor was having difficulty using control charts. The limits were based on the subgroup range and so points were frequently exceeding the limits on the mean chart because of the long term fluctuations. This led to overadjusting the process (this was apparent from the data).
What were the options?
December 12, 2003 at 12:58 am #93512
Hi
You could use a chi-squared test.
Regards
Glen
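Since the original question isn’t quoted here, this is only a generic sketch of a chi-squared test in Python (the contingency table is invented):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: defects vs. non-defects on two production lines
table = [[12, 188],   # line A: 12 defects in 200 units
         [25, 175]]   # line B: 25 defects in 200 units

# Test whether defect rate is independent of production line
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```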
October 27, 2003 at 12:11 am #91634
In the absence of any other explanation I’d guess that somebody was trying to make the point that the customer should come first!
October 26, 2003 at 11:38 pm #91633
Hi Stan
Thanks for the question.
Probability sampling is used extensively. If you want more information, the text Business Research Methods by Zikmund covers the various methods, and when to use them, very well. I notice that my copy talks about shuffling bits of paper in hats to select a random sample of savings and loans associations; now it’s a few keystrokes in Excel!
A couple of examples off the top of my head where you might consider probability sampling over systematic sampling:
You have problems with invoice accuracy. You decide to pull 10% of last year’s invoices for a more thorough check.
You want to gather the views of employees on an issue and decide to sample 10%.
Regards
Glen
October 25, 2003 at 12:34 pm #91589
Hi
If you want a random sample, select a column, say ‘A’, and fill rows 1 through 1500 with random numbers from a uniform distribution using Tools > Data Analysis > Random Number Generation.
Then put the formula below in the first row of another column and fill down to row 1500:
=IF(RANK(A1,$A$1:$A$1500,0)<=150,1,"")
You will have 150 rows containing 1 and the rest blank. Use an autofilter as Dave suggested.
If you are happy with every tenth you can save a few keystrokes by using the random number generation to create a patterned distribution to fill the rows with a repeating sequence of 1 through 10.
Regards
Glen
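For anyone doing this outside Excel, a Python sketch of the same idea (flag 150 of 1500 rows at random):

```python
import random

random.seed(0)  # fixed seed only so the illustration is reproducible
n_rows, n_sample = 1500, 150

# Equivalent of filling a column with uniform random numbers and
# flagging the 150 lowest-ranked rows:
selected = set(random.sample(range(1, n_rows + 1), n_sample))
flags = [1 if row in selected else "" for row in range(1, n_rows + 1)]
print(sum(f == 1 for f in flags))  # 150
```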
October 24, 2003 at 10:00 pm #91572
Hi Geoff
My partner and I have run an e-learning business for over four years and would urge you to look on the positive side.
The fact that we can provide a 365/7 service and yet be free to go on holidays overseas is an enormous advantage to us, not a matter of complaint. A bit of research (and lateral thinking) can reduce the cellphone costs dramatically and in any case all businesses have overheads. My next door neighbour owns a garden supplies business and when I hear his stories about the cost of truck tyres and maintenance I realise how lucky I am.
I rang a potential client in Boston at 11pm our time last night. He asked for names of past customers to contact. My partner emailed a couple to ask their permission. When I got up at 6.30 this (Saturday) morning I saw that she had received their replies, sent their contact details to the guy in Boston and got a reply from him. We don’t think that is something to complain about; we think how exciting it is to be able to do business that way.
On the practical issues of your payment and emailing problems: it’s clear from your posting that your processes need work. Sure, we face a lot of issues because we are at the leading edge of technology and the world hasn’t caught up yet, but with perseverance and persistence we have developed processes that do give secure and reliable service to our clients, and are cost-effective to run.
Best Regards
Glen
October 22, 2003 at 2:39 am #91326
Hi
Going back to the original question I just wanted to point out that it is very dangerous to analyse repeats as though they were replications.
It is also very easy (and tempting) to think you are doing replications when you are actually doing repeats.
If in doubt, assume it is repeats.
Regards
Glen
October 20, 2003 at 10:03 pm #91273
Hi Gabriel
We had a good example of that here a few years ago. A key measure for the company that managed the state ambulance call centre was the response time. Despite it getting good results on that metric there was a lot of public criticism about ambulances arriving late to emergencies, it was claimed a couple of people died needlessly.
It turned out the managers were making large numbers of ‘test’ calls. It was claimed these were to make sure the system was working, but the calls were made in quiet periods and the call centre operators were given notice. The test calls counted in the response time metric.
Pyzdek calls this sort of thing ‘Denominator Management’ in his Six Sigma Handbook and gives a couple more examples.
Regards
Glen
October 20, 2003 at 1:58 pm #91237
Hi Statman
Better or worse? Possibly worse, but let’s not despair.
I’ve got another book here called “Probability and Statistics for Engineering and the Sciences” by Devore (nothing to do with quality, control charts or the like). It says:
“Although S**2 is unbiased for sigma**2, S is a biased estimator of sigma (its bias is small unless n is quite small). However there are other good reasons to use S as an estimator, especially when the population distribution is normal. These will become more apparent when we discuss confidence intervals and hypothesis testing in the next several chapters” (so far they haven’t).
I assumed that C4 was used to compensate for this bias.
I’m not clear from your email whether you are saying that (getting back to S charts):
1) when you calculate the standard deviations of the subgroups each and every one of those standard deviations is biased and hence the average is biased (but can be reduced by using the pooled standard deviation because that effectively increases the sample size) or,
2) the bias is introduced by the averaging operation and can be avoided altogether by using the pooled standard deviation as an alternative to averaging.
My assumption was that the C4 wasn’t used in most applications of the standard deviation because of those mysterious ‘other good reasons’ mentioned by Devore, but that in the case of S charts the argument went the other way because, if the standard deviation was biased, the control limits would be in the wrong place (and would have to be asymmetrical about the mean to be in the right place).
Regards
Glen

October 20, 2003 at 1:39 pm #91236
Hi Gabriel
Interesting points, my thoughts would be:
Regarding Cp/Cpk, The Gauge R&R is mainly directed at process improvement not conformance to specification. If your process has a good Cp it just means that the combined process and measurement variation is well within specification. Despite your process looking stable and being quite acceptable your measuring system may be masking special causes. If you were looking to improve your process to its optimum through a six sigma activity (despite it already being amply adequate) that might be important. If you don’t so intend then it doesn’t matter.
Yes the 10 parts is surprisingly low. The figure of 30 seems to be tossed around, but in my view it is 50 plus before things start settling down, and the figure of 100 you quote is more like it. That’s why it surprises me they are so fussy about the subtleties of d2 and so on.
The way I interpret the bias rule is that if there is a demonstrable amount of bias (at the 95% confidence level) you should fix it. If you can’t fix it you should adjust for it. I can live with that. However I agree with your point because you are asked to get the concurrence of the customer and that would encourage companies to avoid it by using minimum compliance.
With the Kappa test it does draw attention to using a large number of parts representative of the spectrum, but as you say it is open to abuse.
The bit that bemuses me is at the bottom of page 132 where it says “the team decided to go with these results since they were tired of all this analysis and these conclusions were at least justifiable since they found the table on the web”. What does that mean? It seems to say that the whole thing is a waste of time!
The whole thing seems to be a bit of a grab-bag of ideas. For example the number of data categories formula is taken from the First Edition (1984) of the Wheeler and Lyday text “Evaluating the Measurement Process”. That is pretty hard to find because the second edition was published in 1989 and most libraries threw it away. However if you go to the trouble of getting both editions you will find that this formula only appears in the first edition. Wheeler and Lyday apparently discarded it in favour of another measure which is supposed to do the same thing. So the manual is calling on a twenty-year-old (and outdated) book for a formula that was apparently disowned by its originators.
As far as I can find out there isn’t a source where you can get more information. If I can’t understand something in Minitab I can get a paper from their web site that gives a full detailed mathematical explanation – I may not understand it but at least I can rest easy that somebody does!
So I share your conceptual concerns with the manual. On the other hand it is not a standard, only recommendations. If you are going for QS9000 (TS16949) it is up to the auditors whether your compliance is adequate or not. Another factor there is that it not aligned with Minitab, so if you use Minitab for your linearity studies or attribute studies (for example) you will get different results anyway.
Regards
Glen
October 19, 2003 at 8:27 am #91215
Hi Statman
Can you clarify that point about C4? I understood that whilst the sample variance is an unbiased estimate of the population variance, the sample standard deviation is not an unbiased estimate of the population standard deviation. The appropriate correction factor is C4.
(see for example the Introduction to Statistical Quality Control 4th Edition by Montgomery page 92)
I didn’t think C4 was used because of the averaging of the subgroup standard deviations in calculating the limits for the S charts, but because the nature of control charts requires an unbiased estimator.
Regards
Glen
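For reference, C4 can be computed directly from the gamma function, and a quick simulation shows the bias it corrects; a sketch:

```python
import math
import random

def c4(n):
    """Bias-correction constant: E[S] = c4 * sigma for normal samples of size n."""
    return (math.sqrt(2 / (n - 1))
            * math.gamma(n / 2) / math.gamma((n - 1) / 2))

print(round(c4(5), 4))  # 0.9400, the tabulated value for n = 5

# Simulation: the average subgroup S underestimates sigma = 1 by a factor ~c4
random.seed(2)
n, sigma = 5, 1.0
s_values = []
for _ in range(20000):
    x = [random.gauss(0, sigma) for _ in range(n)]
    m = sum(x) / n
    s = math.sqrt(sum((xi - m) ** 2 for xi in x) / (n - 1))
    s_values.append(s)
print(round(sum(s_values) / len(s_values), 3))  # close to c4(5), not 1.0
```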
October 17, 2003 at 10:48 pm #91200
Hi Gabriel
I’d be interested to hear what else you don’t agree with – why not post them?
By the way, have you worked through the calculation on page 88 Table 3 for the t statistic? I get 0.1253 every time, not 0.1153 as stated. This carries through to the dependent calculations.
Glen

October 17, 2003 at 2:38 pm #91180
Hi
Thanks for the information.
The MSA Manual Third Edition is available from the Automotive Industry Action Group http://www.aiag.org specifically http://www.aiag.org/publications/quality/fmea3.asp
In Australia you can get it from the Federation of Automotive Part Manufacturers http://www.fapm.com.au but it’s about three times the price!
It is greatly expanded from the previous version so well worth checking it out.
Regards
Glen
October 16, 2003 at 1:50 pm #91126
Hi
Nice example. Fauzi could extend it to explain interaction (between the cold and water) which is, of course, also confounded with the main effects.
(hope your daughter is ok now)
Glen
October 16, 2003 at 10:56 am #91108
Hi
The confidence limits in the table are calculated from the binomial distribution (the ‘exact’ method). Most statistics text books use the ‘normal approximation’ and will give a somewhat different answer.
The normal approximation formula is:
UCI = p + 1.96(p(1 − p)/n)^0.5
Minitab uses the exact method and will give you the same answer as the MSA manual.
Regards
Glen
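Both intervals are short calculations in Python if you want to check the difference yourself; a sketch with invented data (45 agreements out of 50):

```python
from scipy.stats import beta, norm

k, n = 45, 50  # e.g. 45 agreements out of 50 attribute checks
p = k / n

# Exact (Clopper-Pearson) 95% interval, as used in the MSA manual and Minitab
lo = beta.ppf(0.025, k, n - k + 1)
hi = beta.ppf(0.975, k + 1, n - k)

# Normal approximation, as in most statistics textbooks
z = norm.ppf(0.975)
half = z * (p * (1 - p) / n) ** 0.5
print(f"exact:  ({lo:.3f}, {hi:.3f})")
print(f"normal: ({p - half:.3f}, {p + half:.3f})")
```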
October 15, 2003 at 10:03 pm #91080
Hi Mike
I’m not quite sure what your question is. Are you asking:
1. why you use the R/d2 method to calculate the SD rather than just apply the formula SD = sqrt(Σ(xi – xbar)²/(n – 1)) (as found in statistical text books) to all the data (ignoring the subgroups), or
2. Why you use the range method with subgroups rather than calculating the standard deviation.
The answer to the second is simplest. It is easy to do it by hand and for smallish subgroups (four or five) the loss in accuracy (efficiency) is small. It is a reducing advantage with modern calculators and so on, but it has become a well established practice and inconvenient to change.
The answer for the first concerns what happens if the process mean drifts (as it often does and as the sigma to ppm calculation assumes it will). The range method ignore shifts to the process mean between subgroups. To take an extreme case, suppose you take a sample from a machine and get:
49, 51, 50, 49
now adjust the mean of the process by, say, 10 and take another sample:
59, 60, 59, 61
The range method will give an SD based on two ranges of 2; the mean shift is ‘invisible’ to this method. The theory method will be affected by the mean shift, and the SD calculated from all 8 points would not be a good estimate of the process variation.
You should be able to see how this relates to the process capability calculation.
If you use the ‘theory’ method to calculate the SD of the whole data set, then the indicator you have calculated is the Process Performance Index Pp (needless to say you could not use data where the machine had been adjusted part way through; that was just to illustrate the point).
Regards
Glen
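The extreme example above is easy to verify numerically; a sketch (d2 = 2.059 is the tabulated constant for subgroups of four):

```python
import statistics

# The two subgroups from the example above (mean shifted by ~10 between them)
sub1 = [49, 51, 50, 49]
sub2 = [59, 60, 59, 61]

d2 = 2.059  # tabulated d2 for subgroup size n = 4
r_bar = ((max(sub1) - min(sub1)) + (max(sub2) - min(sub2))) / 2
sigma_within = r_bar / d2                       # ~0.97: blind to the shift
sigma_overall = statistics.stdev(sub1 + sub2)   # ~5.42: sees the shift

print(round(sigma_within, 2), round(sigma_overall, 2))
```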
October 8, 2003 at 2:42 am #90768
Hi Statman
My MSA Manual is the third edition published March 2002. It is an extensive rewrite from the original edition (I don’t have the second).
It has some very significant changes, for example in the Gauge R&R studies the ‘K’ factors they use have all been changed by a factor of 5.15 so that repeatability variation is now in standard deviations instead of 99% confidence intervals. I know of one company that was puzzled as to why their spreadsheets were all wrong!
You are correct about the linearity study, the same argument doesn’t apply because they don’t use the range method in the standard deviation calculation.
October 7, 2003 at 10:34 pm #90764
Hi Statman
The paper by Woodall and Montgomery, that Doc mentioned, is interesting. It gives a different perspective on the distinction between d2 and d2* and when they should be used. It also discusses how to derive d2* from d2 which satisfies another curiosity.
It also proposes a third version, which is d2 divided by d2* squared, which it claims minimises the MSE.
This may be a clue to the answer to my original question about the formula for the confidence bounds in the bias formula in the MSA Manual because multiplying a standard deviation obtained using R/d2* by d2/d2* gives this third version.
I’m still intrigued by why it crops up in the bias formula though – is it based on Woodall and Montgomery’s views or did they arrive at it by some other route? Also, why doesn’t it follow through into the confidence limits for the linearity study?
October 6, 2003 at 1:33 am #90660
Thanks Doc, but why are they used in that particular formula? The standard deviation of the repeatability is calculated using d2* and so already has the correction for the small sample size.
I would have thought that the standard deviation of the bias was already the best estimate and could be used directly in the t test.
Obviously d2/d2* is some sort of correction factor for the t test, but I can’t see why it is there. The values of d2 and d2* are pretty similar with the sample size that is used in bias calculations (> 10) and so it doesn’t make much difference, but I am curious.
0October 4, 2003 at 12:15 pm #90641Hi Michelle
All texts on statistics or quality control would include standard t tables (except the AIAG MSA Manual which I suspect you are studying).
Alternatively you can use Excel:
=TINV(probability, degrees of freedom)
Look at the Excel 'help' for guidance on how to use it, particularly which 'probability' value to use (TINV is two-tailed, so for a 95% confidence level you would enter 0.05).
Regards
Glen
0October 2, 2003 at 11:58 pm #90593Hi
I'm Margaret's partner and I'd like to add a few thoughts to Margaret's response to Susan's query.
We believe that a SSBB course must contain a work-based project, and each student must be coached one-on-one through this project. That means the coach must be familiar with the project and study the data in depth.
Being an effective black belt is difficult, even for university graduates. Students typically show three major weaknesses: not being data-driven (taking the confident assertions of the process owners at face value); not seeing 'obvious' clues in the data that require follow-up; and not observing the process closely (seeing what they think happens, rather than what actually happens). It really takes a good, and attentive, coach to challenge and stretch them. This may be possible by electronic communication, but it is very time-consuming.
The classical model for the six sigma methodology is aimed at the large corporate organisation and is driven by senior management. It involves a very significant investment in recruitment and training.
Susan sounds typical of our online students. She does not work for an organisation that provides access to Master Black Belts, nor will it fund extensive study. She wants to explore the methodology for herself, to find out if, and how, it applies to her organisation.
Susan will find many of the tools used by six sigma very valuable in their own right (they were around long before six sigma). She will also find that the DMAIC approach is very systematic and effective, and will yield excellent results if (and it's a big 'if') it is applied in a disciplined, data-driven manner.
When it comes to online learning, I find it is particularly suited to mathematically based topics such as SPC, DOE, MSA and so on. Well-designed courses can use simulations and activities that are not practical in face-to-face courses. Questions from students can also usually be expressed in ways that suit electronic communication. For example, students email me a spreadsheet with their analysis from a simulation; I correct it, comment appropriately and send it back. I can't do that face to face.
Mathematical subjects take time to absorb; the natural pace of a face-to-face class is too fast. The students may grasp the essentials but then have to go home and study to consolidate them. This often prevents face-to-face students from asking questions: by the time they think of them, the opportunity has passed.
I teach these subjects face-to-face to undergraduate and mature-age postgraduate university students, to industry practitioners on short courses and workshops, through distance learning (with email support) and through online learning. I'd say that online learning is the most effective.
In the case of the 'softer' topics (QFD, FMEA etc.) the situation is different. The face-to-face workshop can take advantage of group dynamics in activities (it is very easy to give the students a taste of brainstorming in a face-to-face workshop; how do you do that online?).
For that reason, many of the offerings that I've seen in that area use what is effectively a PowerPoint presentation format together with multiple-choice questions. That is fair enough as an introduction to the topic, but it does not give the depth necessary to make best use of it in the workplace.
In summary, all types of learning have their strengths and weaknesses. It depends on the subject matter and the options that are available to the learner.
Regards
Glen
0 
AuthorPosts