Defects with Large Standard Errors

This topic contains 17 replies, has 5 voices, and was last updated by John R Walkup 2 months, 3 weeks ago.

Viewing 18 posts - 1 through 18 (of 18 total)

August 16, 2018 at 1:23 pm #706796
There is a problem that has really been bugging me. I own a copy of Gitlow and Levine’s book (which is fantastic, by the way) but I didn’t see it addressed. Here goes:

I have created a new lab activity for our physics department in which students build clay ball bearings to meet a client’s specification for mass (1.80 grams). Students use the PDCA cycle to continuously improve their output, using statistics to determine whether they will have fewer than 1.5×σ defects. Once they have honed their manufacturing process, they build a larger shipment of bearings, which the teaching assistant then measures for defects. So far, all is going well.

We ran the lab as a pilot last summer to iron out some of the difficulties. We found that it is difficult to build the bearings without sizable systematic and random error, so we end up with a distribution whose peak is shifted toward one of the specification limits and which has a wide standard error.

Because of the wide standard error, it isn’t clear how to compute the probabilities of getting a certain number of defects. All of the treatments I have seen assume that the distribution is centered on the sample mean. With a large standard error, that seems a bit idealistic. What does one do in this situation?

(If this is covered in Gitlow/Levine, my apologies to the authors.)

One more question: Has anyone seen this type of lab carried out before? Just curious.

August 16, 2018 at 10:47 pm #706799
The source of variation may be the raw material. The mass/volume ratio of clay can vary considerably due to mineral and water content, and since your only specification is mass… It can also change during handling as water is lost. We know that the output is a function of the inputs, but we often forget that and focus on measuring the output, which doesn’t help us improve all that much.

August 17, 2018 at 4:04 pm #706807
The sources of error are another aspect of the lab that my students need to address. For now, I’m just trying to figure out what to do when the standard error is so large that we cannot expect the population mean to lie close to the sample mean.

August 18, 2018 at 7:18 pm #706815
Fair enough. I assume you know that a large standard error suggests there’s something wrong with the model. Maybe someone who’s more expert in statistics will chime in with better advice. I’d still consider improving control over your raw material so you know the students are starting with a consistent mass/volume ratio. Also, are you giving them some way to measure diameter and sphericity? Without some sort of manufacturing control, you’re going to see wide variation.

August 18, 2018 at 9:36 pm #706817
Keep in mind that the activity is pedagogical. I actually want variance in the results because that’s the only way the students will learn how to handle error. I can buy clay that is more uniform, but I’m trying to focus the lesson more on process, not materials. If the results are really good, there is less impetus for students to revise their processes to improve output.

August 19, 2018 at 12:03 am #706818
Re-reading your initial post, you say that the peak is shifted toward a spec limit. If it’s consistently shifted toward either the USL or the LSL, that suggests your modelled process has some sort of bias. Otherwise, the wide standard error you’re seeing may result from inadequate sample size. For a pedagogical exercise it may seem impractical to increase sample size, but consider that otherwise you may not convincingly demonstrate the fundamentals.

August 19, 2018 at 9:47 am #706819
@rogerd Just need to get clarification on a couple of points. Why is it an issue if the distribution is not centered? It isn’t all that unusual for a distribution to be off-center for cost reasons. I would think that if you can build bearings with less clay (skewed toward the LSL), you could talk to them about the Taguchi loss function so they learn about cost and business.

You said you wanted students to focus on process, not material. If you had clay with less variation, i.e. better clay, that would force them to look at process. If you have lower-quality clay with more variation, you are inflating your standard deviation. I would think something like clay is a natural go-to place for a student to look. If there isn’t anything there to find, that’s a pretty good lesson in not always blaming the material.

August 19, 2018 at 11:19 am #706822
Part of the lesson is to demonstrate the need to remove bias, because it shifts the distribution close to one of the specification limits, driving up the number of defects.

August 19, 2018 at 11:29 am #706823
To me, the problem isn’t that the distribution is off-center. We can account for that simply by calculating Z-values for both tails. My problem is that a large standard error indicates that the distribution for the entire shipment (rather than just the sample) could be far closer to the specification limit than the sample indicates. I’m curious how that is handled in the industrial world. It seems to me that estimating the number of defects from the sample distribution would be fraught with peril.

Perhaps this is not a problem in industry, but when I ask students to build clay ball bearings the large standard error becomes an obvious hole in my lecture notes. So, I’m trying to fill in this hole as best I can.
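To make the concern concrete, here is a minimal sketch of the two-tailed defect calculation, using invented numbers rather than data from the lab. It computes the point estimate of the defect fraction from the sample, then a conservative estimate in which the true mean is allowed to sit a couple of standard errors closer to the nearer spec limit:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def defect_fraction(mean, sd, lsl, usl):
    """Expected fraction out of spec, assuming a normal process."""
    z_low = (lsl - mean) / sd
    z_high = (usl - mean) / sd
    return norm_cdf(z_low) + (1.0 - norm_cdf(z_high))

# Illustrative numbers (not from the thread): target 1.80 g, ±0.10 g spec
lsl, usl = 1.70, 1.90
xbar, s, n = 1.84, 0.04, 25          # sample mean shifted toward the USL
se = s / math.sqrt(n)                # standard error of the mean

nominal = defect_fraction(xbar, s, lsl, usl)

# Conservative view: the true mean could plausibly sit ~2 SE further
# toward the nearer spec limit, so price in that worst case too.
worst = defect_fraction(xbar + 2 * se, s, lsl, usl)

print(f"point estimate: {nominal:.4f}, worst-case (mean +2 SE): {worst:.4f}")
```

With these made-up numbers the worst-case defect fraction is roughly double the point estimate, which is exactly the gap the large standard error creates.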

August 20, 2018 at 7:13 am #706831
Standard error is simply the sample standard deviation divided by the square root of the sample size. To reduce the standard error, all you have to do is increase the sample size.
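As a quick illustration of that formula (the masses below are invented, not from the lab):

```python
import math
import statistics

masses = [1.78, 1.83, 1.81, 1.86, 1.79, 1.84, 1.82, 1.80]  # made-up sample (grams)

s = statistics.stdev(masses)          # sample standard deviation (n-1 denominator)
se = s / math.sqrt(len(masses))       # standard error of the mean

print(f"s = {s:.4f} g, SE = {se:.4f} g")
# Since SE = s / sqrt(n), quadrupling the sample size halves the SE.
```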

August 20, 2018 at 8:59 am #706833
But what if you can’t increase the sample size, or it is cost-prohibitive?

August 20, 2018 at 3:59 pm #706836
Unfortunately, there isn’t anything else you can do to improve the precision of your estimates, but you can at least quantify it with confidence intervals. Here are a couple of articles on calculating confidence intervals for Cpk:

https://www.qualitydigest.com/may00/html/lastword.html

http://www.indium.com/blog/calculating-confidence-intervals-on-cpks.php

If you are using statistical software such as Minitab for your analysis, there may be an option for it to include this calculation for you. (In Minitab’s Capability Analysis, click the Options button and check the box for Include Confidence Intervals.)
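For anyone without such software, a rough sketch of one common normal-approximation interval for Cpk (a Bissell-style formula; the linked articles discuss the variants) might look like the following, with illustrative numbers rather than data from the thread:

```python
import math

def cpk(mean, sd, lsl, usl):
    """Point estimate of Cpk from sample statistics."""
    return min(usl - mean, mean - lsl) / (3.0 * sd)

def cpk_ci(cpk_hat, n, z=1.96):
    """Approximate 95% CI for Cpk using a normal approximation
    (one common Bissell-style form; assumes a normal process)."""
    se = math.sqrt(1.0 / (9.0 * n) + cpk_hat**2 / (2.0 * (n - 1)))
    return cpk_hat - z * se, cpk_hat + z * se

# Illustrative numbers: spec 1.70-1.90 g, sample of 25 bearings
est = cpk(mean=1.84, sd=0.04, lsl=1.70, usl=1.90)
lo, hi = cpk_ci(est, n=25)
print(f"Cpk = {est:.3f}, 95% CI ≈ ({lo:.3f}, {hi:.3f})")
```

Even with n = 25, the interval is wide, which quantifies exactly how shaky a capability claim based on a small sample is.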

August 21, 2018 at 8:09 pm #706847
@rogerd Could you share exactly what you want your students to learn from this exercise? It seems to me that the large standard error, its implications, and its potential causes are a worthwhile teaching point. As for what we’d do in industry, as stated in my comments and others’: use consistent raw material, increase sample size, and reduce manufacturing process variation. If I were given this exercise, I’d want to find a way to make the clay ball bearings consistent in size rather than just eyeballing it.

August 21, 2018 at 8:55 pm #706849
Here is what I want my students to learn:

1. More consistent raw material will naturally produce smaller standard error, but what if the material costs 25% more? You going to buy it? If you are producing too many defects, probably. But what if you’re not? Now your costs go up by (say) 10% and you start losing contracts to other firms. If you cannot estimate to reasonable precision the number of defects you are likely to produce, how are you going to make that decision?

2. We can increase sample size, but what if you’re paying for each sample measure? Is increasing the sample size a cost-effective solution?

You are right, you would want to find a way to make the ball bearing sizes more consistent than eyeballing them. In the second or third round of PDCA, some students decided to roll cylinders of clay and cut them at equal intervals. It helped a bit, but rolling a cylinder of uniform diameter is a challenge. Some decided to have only one person roll the bearings, which helped a little in terms of random error, but it increased systematic error.

In other words, this lab is all about learning data-driven decision-making. Yes, we can buy a bearing-rolling machine and get them all real good, but they’re not going to learn from that.

August 21, 2018 at 8:57 pm #706850
Yeah, but that costs money. And the number of bearings my students can sample is fixed. They’re not allowed to increase the sample size. (If allowed, they would simply measure every bearing they produce and wouldn’t learn anything.)

August 21, 2018 at 8:58 pm #706851
Thanks for the resources. I’ll check them out.

August 22, 2018 at 1:41 pm #706857
The issue is risk management… as I am sure you know; quality is simply a specific type of risk management (isn’t everything?). Framed that way, you’d like to know the probability and impact of sending defects to your customer, and to gain the best knowledge available to make the most informed decisions. For now, we are only dealing with probability. So you take a sample and get a result. The standard error tells you that your sample result is not giving you the full story. In other words, if you based decisions on the one sample, your confidence for taking action would be low. Of course, the action you are taking is based on where the sample fell relative to the specifications. The first risk is that, given a sample mean, you can calculate the probability of defects; but the question is, should I go with that, given a high standard error? This logic may lead to a lesson as follows:

Based on the nature of standard error (the formula), you know that it can be reduced by either increasing sample size or decreasing variability… so you may need to do a correlation analysis on consistency of raw material to bearing mass variability. If there is positive correlation, then you know by reducing inconsistency in raw material, you will reduce standard error. If you reduce standard error, you will have more confidence in the estimate that leads to action. In other words, you will be more confident in the conclusion of whether or not your process is capable… or how capable it is.

Back to the risk management lesson (because that is what I think this ends up being). The real lesson to the student is a classic treatment of the relationship of standard error to standard deviation. Taking action on what a sample is telling you should be influenced by standard error. If you can reduce standard error, do so. If you cannot, proceed with knowledge, i.e. know the risk of your decision.

August 22, 2018 at 3:44 pm #706859
Yes, that’s what I have in mind. Thanks for expressing it so eloquently.


