## Joel Mason

@joelmason35 · Member since July 27, 2016

## Forum Replies Created

June 3, 2019 at 10:20 am #239592

Joel Mason · Participant · @joelmason35

@sharmin – Here are a few more thoughts in addition to what others have said:

1. Did you take the time to establish ground rules for the team? If not, that might be contributing to your experience. I believe it isn’t too late to establish them if you haven’t.

2. Does the process owner (who sounds like someone I would call a “champion” in my world) understand the why of Six Sigma? Do the team members? If not, you might need to spend some time explaining the why of the methodology and toolset. And if they have not embraced the why of the project itself, its aim, then that is a gap as well. In Simon Sinek’s words, start with why.

3. Did the process owner charter the project with your assistance? If the process owner did not lead the chartering and the kick-off, that’s a gap from the start in my opinion.

4. Are you dominating the conversations? I’ve seen cases where Black Belts were so active in their facilitation that the team members began deferring to the project lead when they shouldn’t have, and they did so almost unknowingly.

If I’m asking myself “should I raise the alarm?”, the answer is generally yes. That’s just my personal experience. How to raise the alarm and what to do about it are other questions. I believe the alarm starts with you being very direct with the team first – not going to the VP of Quality and the champion first. Being direct is a lot easier for me when I have agreed-upon ground rules of behavior, a clear charter, and an established level of trust that I’ve built. Handle this conflict inside the team first if at all possible. If you’ve already done that and the reason for the lack of progress really is other priorities in the business, then that is reason to have a conversation about priorities with the process owner and your VP of Quality.

Best wishes to you – I suspect we’ve all been in your position. On the bright side, some of the deepest professional relationships I have now are ones that endured conflict like this. We came out on the other side with deeper relationships than before.

February 18, 2019 at 11:27 am #236435

Thomas Rust of Autoliv gave a great presentation on attribute MSA at the 2016 ASQ Fall Technical Conference. To get the recordings, you can contact ASQ. I’m not sure what the charge would be, but I’m guessing it will be modest; I have found their costs to be very reasonable. Since you have a vision system, you more than likely have an underlying variable characteristic even though the system turns that information into a pass-fail judgment. In his presentation, I thought Thomas did a nice job communicating how you can leverage underlying variable characteristics in an attribute MSA. For example, the vision system is more than likely calculating a count of pixel matches. With the resolution now available in these kinds of systems, that kind of data approaches a continuous measure. Leveraging it can get you around the sample size problems Mike mentioned in an attribute agreement analysis.

Joel

February 18, 2019 at 11:07 am #236433

I’d like to add more detail to what @MikeCarnell offered, because I suspect his point is at the heart of your question. For a standard normal distribution (mean = 0, standard deviation = 1), the area under the curve beyond 4.5 standard units from the mean in one tail is 3.4 x 10^-6. That’s what Mike means by saying it is a shifted value. Companies like Motorola that were early adopters of what we call Six Sigma understood that most of their processes did not in fact have stable central tendencies; they knew that their processes “wandered around” a little. Walter Shewhart knew that as well back in the 1920s, when he was working for Western Electric and developed the first SPC charts that we still use today. Without modeling that wandering mean, what do you do? Generally speaking, you give yourself room for it to happen. That’s why most DPMO tables that you will find online and in the literature allow for a 1.5 sigma shift in the mean. So 6-sigma DPMO allows for the central tendency to shift right or left by 1.5 standard units, which puts the nearer specification limit at 6 – 1.5 = 4.5 standard units from the shifted mean – and the single-tail area beyond 4.5 sigma is where the famous 3.4 defects per million comes from.
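The arithmetic above can be checked with a short script. This is a minimal sketch using only the standard library; `tail_prob` is just a helper name here:

```python
import math

def tail_prob(z):
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Area in ONE tail beyond 4.5 sigma, expressed per million observations:
print(tail_prob(4.5) * 1e6)  # ~3.4

# A "6 sigma" process with the conventional 1.5-sigma shift allowance:
# the near specification limit sits at 6 - 1.5 = 4.5 sigma from the
# shifted mean, which is exactly the 3.4 DPMO figure.
dpmo = tail_prob(6 - 1.5) * 1e6
print(round(dpmo, 1))
```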

Over the years, academics and industry professionals have figured out elegant ways of modeling processes that have central tendencies that move. As you progress in your mastery of the six sigma body of knowledge, I hope you allow yourself to learn more about those approaches. Good luck.

Joel

September 10, 2018 at 6:57 am #203010

Cornelis,

I believe your scenario can fit into a traditional Gauge R&R experimental structure. In your case, you can treat your probes as “operators” in the traditional Gauge R&R sense: you have two probes that will be exposed to the exact same batch each time. I would think of your batches as your “parts.” It sounds like you have some engineering experience telling you that your measurement uncertainty will change with the batch (which would be like a linearity problem, as Daniel mentioned). How many batches can you afford to mix for this measurement validation? In a traditional Gauge R&R with 2 operators, in my industry we would want 10 parts (batches, in your case) spanning the entire specification range of the characteristic being measured, and 3 replicates (each operator measuring each part 3 times, in a blind and randomized fashion).
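As a sketch of what that design looks like in practice (the probe and batch names are placeholders of mine), the full run list can be generated and randomized like this:

```python
import itertools
import random

probes = ["probe_A", "probe_B"]                      # the "operators"
batches = [f"batch_{i:02d}" for i in range(1, 11)]   # the "parts"
replicates = 3

# Every probe measures every batch 3 times; shuffling the run order
# keeps the replicate measurements blind and randomized.
runs = [(probe, batch, rep)
        for probe, batch in itertools.product(probes, batches)
        for rep in range(1, replicates + 1)]
random.shuffle(runs)

print(len(runs))  # 2 probes x 10 batches x 3 replicates = 60 measurements
```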

And Daniel is correct – a traditional Gauge R&R will not reveal linearity and bias problems. Linearity is a changing bias across the operational range of the measurement system. Bias is, well, bias – an “offset” from the true value. To study those, as Daniel indicates, you would need samples of known pH to which to expose your probes.

Everything I’ve said assumes you are interested in quantifying the uncertainty of your measurement system, not setting control limits around the food production process itself. In my industry, we typically quantify the measurement uncertainty first, and then perform a capability study once we understand the uncertainty due to the measurement system and have ensured it is capable of discriminating good from bad. That is because a capability study (from which control limits are calculated) lumps together the variation due to the process and the variation due to the measurement system. Best wishes.

If you want to discuss this further, indicate that in a response and perhaps we can connect by other means.

Joel

John Deere

January 30, 2017 at 8:04 am #200430

Ravi,

I have no disagreement with either contributor. They’ve given you good advice. What I have to offer is a slight repackaging. I thought it might be helpful to give you the formula for the confidence interval for a proportion:

p = proportion

n = sample size

alpha = significance level. For a 95% confidence level, alpha = 0.05. It’s the risk of rejecting the null hypothesis when you shouldn’t. In the formula below, you can think of the null hypothesis as “the true population proportion is what I measured in my sample.” Here’s the formula:

confidence interval = p +/- Z(alpha/2) * sqrt(p(1-p)/n)

Z(0.025) = 1.96 for 95% confidence

My post here does not propose a way for you to assess your compliance to a specification, because you haven’t given us your specification – unless your client has simply specified “give us 98% good transactions.” I’d just like to point out that, given the above formula and an assumed proportion of 0.98, a sample size of 250 gives you a confidence interval of 98% +/- 1.7%. A general rule of thumb for this normal approximation is that n is “sufficiently large” if n(1-p) is greater than or equal to 5 and the interval does not include 0 or 1. What this means is: if you sampled 250 records and found that 98% were good, statistically you could only say that you are 95% sure the true population proportion is no worse than 96.3% and no better than 99.7%. And that of course assumes your sample is truly random (each record has an equal probability of being audited), representative of the population, and free from bias. Good luck.

Joel
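The interval quoted above can be reproduced with a few lines – a minimal sketch of the normal-approximation formula (`proportion_ci` is just a helper name here):

```python
import math

def proportion_ci(p, n, z=1.96):
    """Normal-approximation confidence interval for a proportion (z = 1.96 for 95%)."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# An assumed proportion of 0.98 audited from a sample of 250 records:
low, high = proportion_ci(0.98, 250)
print(f"{low:.3f} to {high:.3f}")  # roughly 0.963 to 0.997
```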

January 3, 2017 at 5:59 am #200335

Preddy,

I agree with Robert Butler’s comments. If you are interested in pursuing Logistic Regression and are a Minitab user, Minitab conducted a short workshop on Logistic Regression at the 2016 ASQ World Conference on Six Sigma. I’m sure they will provide the slides if they are not already loaded onto the conference website (they sent them to me afterwards). I have also used Statistica to do the kind of logistic regression Robert mentioned. I have not used JMP. At the 2012 ASQ Fall Technical conference, Douglas Montgomery did an excellent two-day short course on General Linear Models, including Logistic Regression. I have mentioned resources in my reply that are aimed at practitioners wanting a fairly quick start in the subject as opposed to more academic resources. Good luck.

Joel

November 7, 2016 at 8:30 am #200212

Jamie,

The advice of the other responders is good and it might be enough for you. But if you are interested in quantifying variance due to different measuring devices in your gage R&R, then I think you need to consider an expanded gage R&R. In a traditional gage R&R where measurements are taken with a single measuring device, using an ANOVA method for analysis, you are able to parse out the variance into:

1. part to part variation

2. gage variation (repeatability)

3. operator to operator variation (reproducibility)

4. operator to part interaction (also considered reproducibility)

In an expanded gage R&R, you expand the study to include multiple measuring devices. That will allow you to understand additionally:

5. gage to gage variation

6. gage to part interaction

7. gage to operator interaction
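A minimal sketch of the fully crossed structure behind such a study (the factor names and sizes here are illustrative, not prescriptive):

```python
import itertools

parts = [f"part_{i}" for i in range(1, 11)]
operators = ["op_1", "op_2", "op_3"]
gages = ["gage_A", "gage_B"]
replicates = 3

# Fully crossed: every operator measures every part on every gage,
# with replicates. This crossing is what lets the ANOVA separate
# gage-to-gage variation and the gage interactions listed above.
runs = list(itertools.product(gages, operators, parts, range(1, replicates + 1)))
print(len(runs))  # 2 x 3 x 10 x 3 = 180 measurements
```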

There are some good ASQ papers out there that would help you set up an experiment if you wanted to pursue an expanded gage R&R.

Joel Mason

John Deere

November 7, 2016 at 8:02 am #200209

Rob,

This is my first time responding to a post in this forum. Based on your description, I recommend you investigate Functional Response Experiments. When the response of an experiment is a series of data points collected over a continuum, that is known as functional data in literature. At this year’s ASQ fall technical conference, there was a paper by Mona Khoddam from Arizona State that dealt with this topic in the context of mixture experiments. As if functional data wasn’t already complex enough, she put it in the context of mixture experiments which promptly blew my mind.

Functional Data Analysis (FDA) is presently over my head. But put simply, consider that when your response is a curve and not a single data point (hypothetically think of pressure on the y-axis and speed on the x-axis or something like that or maybe time is on the x-axis), then that response has a model that describes it. With DoE, we can think of the parameters of that model as the dependent variables. And I believe the analysis method for analyzing the results of the DoE is FDA.
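As a toy illustration of that idea (the curve shapes and run names are invented), each run’s response curve can be reduced to fitted model parameters, and those fitted coefficients – not the raw curves – become the responses you analyze in the DoE:

```python
import numpy as np

# Hypothetical functional responses: one pressure-vs-speed curve per DoE run.
rng = np.random.default_rng(0)
speed = np.linspace(0.0, 10.0, 50)
curves = {
    "run_1": 2.0 + 0.5 * speed - 0.03 * speed**2 + rng.normal(0, 0.05, 50),
    "run_2": 1.8 + 0.6 * speed - 0.04 * speed**2 + rng.normal(0, 0.05, 50),
}

# Reduce each curve to the parameters of a quadratic model.
params = {}
for run, pressure in curves.items():
    c2, c1, c0 = np.polyfit(speed, pressure, deg=2)  # highest power first
    params[run] = (c0, c1, c2)
    print(run, round(c0, 2), round(c1, 2), round(c2, 3))
```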

Best of Luck (and when you figure it out, let me know how you did it)

Joel Mason

John Deere