iSixSigma

tgause

Forum Replies Created

    #241035

    tgause
    Participant

    @cseider: I agree with you, Chris. I wouldn’t feel comfortable making any kind of final judgment based on 2 or 3 data points on a control chart; I would definitely do my best to get more. However, if, for whatever reason, I couldn’t get more than 3 at that time, then, according to the authors, I could go with it and make inferences about the process, although I’d be cautious.

    #241034

    tgause
    Participant

    @mike-carnell:

    To be clear, I was NOT suggesting that we replace the UCL and LCL with the USL and LSL. I fully understand why the control limits are there, how to calculate them, and what they tell us as readers of the chart. They are a must on a control chart.

    I was wondering what harm, if any (other than possible confusion on the reader’s part), would come from including the specification limits in addition to the control limits on a control chart. In other words, there would be five lines on the chart:

    • the mean – required
    • the two control limits (UCL and LCL) – required
    • and the two spec limits (USL and LSL) – optional

    By having both the control and spec limits on the chart, wouldn’t we be able to determine not only if the process is in control, but also if it’s meeting customer specifications?
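
    To make the picture concrete, here’s a rough Python/matplotlib sketch of what I have in mind. The data and spec values are made up purely for illustration (the 2.66 factor is just the standard individuals-chart constant); this isn’t meant to represent any particular charting tool.

    ```python
    # Hypothetical individuals (I) chart with the required lines (mean, UCL, LCL)
    # plus the optional spec-limit lines (USL, LSL). All numbers are made up.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    x = rng.normal(loc=10.0, scale=0.5, size=30)   # hypothetical measurements

    mean = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()             # average moving range
    ucl = mean + 2.66 * mr_bar                     # standard I-chart limits
    lcl = mean - 2.66 * mr_bar
    usl, lsl = 12.0, 8.0                           # hypothetical customer specs

    plt.plot(x, marker="o")
    plt.axhline(mean, color="green", label="Mean (required)")
    plt.axhline(ucl, color="red", label="UCL / LCL (required)")
    plt.axhline(lcl, color="red")
    plt.axhline(usl, color="blue", linestyle="--", label="USL / LSL (optional)")
    plt.axhline(lsl, color="blue", linestyle="--")
    plt.legend(loc="best")
    plt.title("I chart with control limits and spec limits")
    plt.show()
    ```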

    #240850

    tgause
    Participant

    @rbutler, @straydog, @mike-carnell – Thank you all for your responses.

    Gotta ask: You guys seem to respond to most questions I’ve seen posted on this forum. Is this your day job or something? Do you guys sit around all day answering people’s questions on this forum? Every time I’ve posted a question, I know I can count on getting a response from one, if not all of you (and I look forward to it!).

    Thank you for all you do and for sharing your knowledge and expertise with the rest of us! You’re helping to make the world a better place when you share your passion with others. And it’s apparent you’re passionate about process improvement. I look forward to talking with you again soon.

    Your humble student,

    “Grasshopper”

    #237200

    tgause
    Participant

    @rbutler – Thank you for responding. I took your advice and found an article that I think will help: https://www.isixsigma.com/tools-templates/capability-indices-process-capability/process-capability-calculations-non-normal-data/

    #236700

    tgause
    Participant

    @Minitab – First, thank you for taking the time to respond to my question and for clearly explaining what was going on. Second, based on your response, I realized I had made an error in my post. I should have said the normality test “failed to reject” the null hypothesis.

    #235895

    tgause
    Participant

    @rbutler, I found the problem. After 10 minutes of pulling my hair out and doing the calculation over and over and over and always getting the same result of 0.61 (definition of insanity, right?), I decided to put on my Green Belt hat and do a root cause analysis. What could be the root cause of this problem? So, I decided to do an experiment of sorts. I did the calculation on a different calculator, thinking, maybe…just maybe…something weird is going on. And sure enough, something weird was going on. It wasn’t my calculations. It was my calculator. When I typed in all 9’s, I discovered the LCD screen is messed up. The 4th digit from the left appears as a 5, not a 9! Calculator’s in the trash. Thanks!

    #202756

    tgause
    Participant

    Mr. Butler,

    Allow me to clarify the situation for you, so you’ll better understand what I’m asking and why I’m asking it. My apologies for not being clear previously. To quote Julie Andrews in The Sound of Music, “Let’s start at the very beginning, a very good place to start.”

    I’m coaching a colleague on a project she’s leading. She approached me and said that, based on some data she received, she believes there is an issue with bills my company is processing; namely, that bills are being processed incorrectly, which is causing rework for us and our customers. I asked her what the magnitude of the problem is. In other words, we process literally millions of bills annually. How many of the bills we process each year are defective, based on the specific issue she’s referring to? 100? 1,000? 10,000? She didn’t know. So, I suggested we estimate the proportion of bills that are defective out of the total population of bills we process annually. Since we cannot audit the millions of bills we process each year, we could pull a sample, and based on the proportion of defective bills in the sample, we could create a confidence interval for the population proportion.

    We began by calculating the sample size needed to give us a 95% confidence interval with a 5% margin of error (MOE, the acronym I used previously). I used the attached Excel spreadsheet to calculate this; it’s a spreadsheet I found online somewhere (I don’t recall where). Since this is a study we’ve never conducted before, I entered .5 in both the proportion-of-successes and proportion-of-failures fields. It showed she needed to pull 385 samples.
    She pulled the samples, audited them, and told me that out of the 385 audits she conducted, 93 (or 24.16%) met her criteria for a defect. Confidence interval: 19.88% to 28.43%. Boom. Done.
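
    For what it’s worth, here’s a quick Python check of those numbers. I’m assuming the spreadsheet uses the standard normal-approximation (Wald) interval for a proportion; that’s a guess on my part, but the arithmetic lines up.

    ```python
    # Check of the 93-out-of-385 result using the normal-approximation interval.
    import math

    n, defects = 385, 93
    p_hat = defects / n                        # 0.2416 -> 24.16%
    se = math.sqrt(p_hat * (1 - p_hat) / n)    # standard error of the proportion
    z = 1.96                                   # z for 95% confidence
    lower, upper = p_hat - z * se, p_hat + z * se
    print(f"{p_hat:.2%}, CI: {lower:.2%} to {upper:.2%}")   # 24.16%, CI: 19.88% to 28.43%
    ```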

    A day or so later, I was having a conversation with one of my managers and telling her about this project. She said to me, “You didn’t need to have her audit 385 samples. That was a waste. She only needed to do 30.” Huh? 30? “Why 30?” I asked. I don’t recall the exact response I got, but whatever it was made no sense to me. I do vaguely recall learning in “Green Belt” school that all you needed was 30 samples (because that was a “large” sample) to do certain tests. I didn’t remember much about it, so I Googled it to learn more. Needless to say, that only made the problem worse. I searched websites, watched YouTube videos, read white papers, black papers, and every other colored paper you can imagine. While people attempted to explain it, I couldn’t understand it. Some places said the “30-sample rule” is a myth. Others claimed it’s true. Contradictions everywhere. Nowhere could I get a simple, easy-to-understand-in-layman’s-terms explanation of this “30-sample rule”. But I know I’ve heard people say this before now. This was not the first time. I’ve just never questioned it until now.

    Hence, my friend, why I posed the question here. So, again, if 30 is a sufficient number of samples that I can use to create a confidence interval for a population proportion, then why is this calculator telling me I need 385? If 30 is sufficient as I’ve heard people say, why do we have sample size calculations or calculators? And, by the way, when I do this same calculation in Minitab v.17, it tells me I need 402 samples! <banging head on desk>
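
    For reference, this is where I believe the 385 comes from: the usual sample-size formula for estimating a proportion, n = z^2 * p * (1 - p) / E^2, with p = 0.5 as the worst case and E = 0.05. The same formula, run the other way, also shows what margin of error 30 samples would actually buy. (A quick Python sketch of my understanding; it’s the standard normal-approximation formula, which may well not be exactly what Minitab is doing.)

    ```python
    # Sample size for a proportion at 95% confidence and 5% margin of error,
    # plus the margin of error you'd get if you only audited 30 bills.
    import math

    z, p, E = 1.96, 0.5, 0.05
    n = z**2 * p * (1 - p) / E**2
    print(math.ceil(n))                        # 385

    moe_30 = z * math.sqrt(p * (1 - p) / 30)
    print(f"{moe_30:.1%}")                     # ~17.9% margin of error with only 30 samples
    ```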

    This is one site I found. I think the answer is in the first paragraph, but I’m not sure. http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Probability/BS704_Probability12.html

    #202744

    tgause
    Participant

    Mr. Butler,
    Thank you very much for your detailed explanations. I greatly appreciate the time and effort you made to articulate your responses. If I may, I’d like to continue this discussion and request your further reply.

    You stated previously, “The fact remains the minimum number of samples needed to make inferences about a population is 2. With 2 samples you have an estimate of the mean and the standard deviation and you can use those results to test for differences between your sample mean/variation and another sample mean/variation or a target mean/variation.” I’d like to validate my understanding of this using an example, please. In my line of work, my company does bill review for other companies. I’m currently working on a project aimed at reducing the number of bills that are reviewed or processed incorrectly (which we’ll refer to as a defect). My team is attempting to measure the proportion of bills that have been processed incorrectly out of a population of bills. Say, in a 6-month period, we processed 10,000 bills. I’m trying to understand how many bills we would need to audit to estimate, with some certainty, the proportion of bills in the population that were processed incorrectly. Clearly, we cannot audit all 10,000. So, what’s the minimum number we have to audit?

    According to an Excel spreadsheet I have that calculates confidence intervals for proportions, I would need to audit 385 bills to have a 95% confidence level with a 5% MOE (since we have no previous data to go on, I also entered .5 for both the proportions of successes and failures). Am I to understand from your previous comment that, instead of auditing 385, I could audit 2, and the results of those 2 would be sufficient for me to make inferences about the 10,000 in the population?

    If not, would you kindly explain to me where my understanding has gone awry?
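
    In case it helps you see where I’m stuck, here’s a small Python sketch of how I currently understand it (again assuming the standard normal-approximation formula, which is a guess about what the spreadsheet does): the margin of error around the estimated proportion depends on how many bills we audit.

    ```python
    # Margin of error for an estimated proportion at 95% confidence,
    # using the worst-case proportion p = 0.5, for a few audit sizes.
    import math

    z, p = 1.96, 0.5
    for n in (2, 30, 385):
        moe = z * math.sqrt(p * (1 - p) / n)
        print(f"n = {n:3d}: margin of error ~ {moe:.1%}")
    # n =   2: ~69.3%   n =  30: ~17.9%   n = 385: ~5.0%
    ```

    If that’s right, then 2 audits would give an extremely wide interval for the proportion, which is part of why I’m confused.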
