# rams

## Forum Replies Created

Viewing 22 posts - 1 through 22 (of 22 total)
• #190036

rams
Participant

Dear Sensei,

Thank you for clarifying the difference between variation and variance. So if a process is precise but not accurate, in Six Sigma parlance this would still be called variation. Got it.

#88743

rams
Participant

Tony,
Here's a simple formula for a Type I error of 5% and a Type II error of 5% (i.e., power = 95%):
n = 121/lambda^2
Lambda is the size of the shift in sigma units (sigma-fold), usually from 2 to 4. If, say, you choose lambda = 2.72, that means you want to detect a shift as large as 2.72 sigma with a false-detection rate of 5% (Type I) and a missed-detection rate of 5% (Type II). The calculated sample size is around 17 samples per group.
rams
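A minimal sketch of this rule of thumb in Python. The constant 121 and the lambda range come from the post; rounding up to a whole sample is my assumption:

```python
import math

def sample_size(lam: float) -> int:
    """Rough per-group sample size for alpha = beta = 5%,
    using the post's rule of thumb n = 121 / lambda^2,
    where lam is the shift to detect in sigma units."""
    return math.ceil(121 / lam ** 2)

# Detecting a 2.72-sigma shift needs about 17 samples per group.
print(sample_size(2.72))  # -> 17
```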

#88390

rams
Participant

Also, that is why we use X-bar charts: by plotting subgroup means, we invoke the central limit theorem, so the plotted points are approximately normal even when the underlying data are not.
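A quick simulation of this point, using hypothetical exponential (heavily skewed) data and subgroups of five; both choices are mine, for illustration:

```python
import random
import statistics

random.seed(42)

# Skewed individual measurements (exponential, skewness = 2 in theory).
data = [random.expovariate(1.0) for _ in range(5000)]

# An X-bar chart plots subgroup means; with subgroups of n = 5 the
# means are noticeably closer to normal (central limit theorem).
xbars = [statistics.mean(data[i:i + 5]) for i in range(0, len(data), 5)]

def skewness(xs):
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

print(skewness(data))   # near 2 for exponential data
print(skewness(xbars))  # much smaller: closer to normal
```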

#87449

rams
Participant

You can use the catapult throughout the entire DMAIC. You can use it as an exercise for: Process Mapping, Gage R&R, Capability Study and Variation Study, DOE, and Control Charting.
It's very effective, as students can relate all the exercises and come to appreciate the DMAIC process.

#86688

rams
Participant

Maybe this is an alternative:
If you are not sure whether there are quadratic effects, try a 2^2 factorial using the low and high levels of your factors.
Check for lack of fit, or run validation points at the midpoints to check for curvature.
If there is no significant curvature, the experiment is OK.
Otherwise, augment the design to form a central composite design (face-centered).
This is assuming your factors are continuous.
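A sketch of the design build-up in coded units, assuming two continuous factors and three center points (the run counts are illustrative, not from the post):

```python
from itertools import product

# 2^2 factorial in coded units, plus center points to test curvature.
factorial_pts = [list(p) for p in product([-1, 1], repeat=2)]
center_pts = [[0, 0]] * 3

# If curvature is significant, augment with face-centered axial
# points (alpha = 1) to complete a face-centered CCD.
axial_pts = [[-1, 0], [1, 0], [0, -1], [0, 1]]

ccd = factorial_pts + center_pts + axial_pts
for run in ccd:
    print(run)
```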

#86243

rams
Participant

I always stress this point during my DOE training: before you collect data for experiments, make sure the process is in control. I illustrate this using the "catapult."
I shoot the ball using a low and a high start angle and record the data. Participants are surprised that the lower angle produced a longer distance. This is not logical.
It's because when I set the catapult at the higher angle, I pushed the ball too hard into the cup, resulting in a shorter distance. This is a case of missing SOPs, which makes the process out of control.

#85042

rams
Participant

RTY is simply the probability of manufacturing a defect-free unit. It's up to you whether this metric is important to you and to your company.
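A minimal illustration of rolled throughput yield as the product of the first-pass yields, using hypothetical per-step yields for a four-step process:

```python
from math import prod

# Rolled throughput yield: probability a unit passes every process
# step defect-free. Hypothetical first-pass yields per step.
step_yields = [0.98, 0.95, 0.99, 0.97]

rty = prod(step_yields)
print(round(rty, 4))  # product of the step yields
```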

#81231

rams
Participant

Has anyone used the Johnson curves?
http://www.qualitydigest.com/dec99/html/nonnormal.html

#81230

rams
Participant

There is no restriction on sample size as long as no more than 20% of your "cells" have an expected count of less than 5. This is the warning you get when you use SAS JMP (not sure about other software).
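A sketch of the 20% rule on a hypothetical 2x3 contingency table (the counts are made up for illustration):

```python
# Hypothetical 2x3 contingency table of observed counts.
observed = [
    [12, 5, 8],
    [9, 3, 6],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected count per cell = row total * column total / grand total.
expected = [[r * c / grand for c in col_totals] for r in row_totals]

# Rule of thumb: no more than 20% of cells may have expected < 5.
cells = [e for row in expected for e in row]
frac_small = sum(e < 5 for e in cells) / len(cells)
print(frac_small, frac_small <= 0.20)  # here the rule is violated
```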

#80912

rams
Participant

Ask the students to write 5 capital letter "A"s. Next, instruct them to write 5 more As, this time using the other hand.
Here's how you explain it:
Imagine you are a company producing capital letter As. How long have you been producing these As, or in other words, how long have you been writing them? The estimate should be around 20 to 25 years (depending on your audience).
Now tell them: they have been writing As for that long, and yet the first set of As are not perfectly alike (though they do look similar). This is because of common cause variation. You cannot eliminate it, but you can reduce it.
Now look at the next set of As. Do you think the difference is attributable to common cause? Of course not. This is what you call special cause variation. What was the assignable cause? Switching the writing hand.
Hope this helps.
Rams

#80787

rams
Participant

You can also take a look at your gage resolution. Your equipment might not be capable of detecting changes in measurements.

#80782

rams
Participant

I attended an FMEA training that used building blocks for the exercise.
The objective was to build a car from the building blocks following some specifications, such as weight, length, height, and functionality.
We then developed a design FMEA on the car we built.

#80736

rams
Participant

Hemanth,
Now I am confused :-).
I thought Cpk computation is based on “within subgroup” variation because of R-bar.
Enlighten me.
rams

#80732

rams
Participant

Just don’t forget to use good engineering judgement, prior knowledge and common sense when assessing the importance of each effect.
rams

#80645

rams
Participant

CT,
When you jumble the arrangement, you don't get the same difference for each pair, so the result will be different.
If you use a 2-sample t-test (two independent samples), you'll get the same result regardless of order.

#80636

rams
Participant

Some references state that for a one-sided spec, Cp = Cpk; others state that Cp does not exist. I prefer the second, since Cp measures how capable you potentially are of meeting requirements, and requirements refer to your specification range (which only applies to a two-sided spec).
rams

#80635

rams
Participant

Great discussion guys.
Same thought as Eileen, I recommend adding midpoints to test for non-linearity. Should there be evidence that the effect is non-linear, then just add the “axial” points to complete a central composite design.
rams

#80633

rams
Participant

When performing this test, maintain the order of the data because the observations are "paired" (not independent). Otherwise, you are performing a 2-sample test, where the assumption is that the two populations are independent. A paired t-test is like performing a one-sample test on the difference of each data pair. Your null hypothesis is Ho: difference = 0.
Hope this helps
rams
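A sketch of this equivalence with hypothetical before/after data: the paired t-statistic is just the one-sample t-statistic computed on the per-pair differences.

```python
import math
import statistics

# Hypothetical before/after measurements on the same 8 units.
before = [10.2, 9.8, 11.1, 10.5, 9.9, 10.7, 10.0, 10.4]
after = [9.9, 9.5, 10.8, 10.6, 9.6, 10.2, 9.8, 10.1]

# Paired t-test = one-sample t-test on the per-pair differences,
# with Ho: mean difference = 0.
diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
t_paired = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(t_paired)
```

Shuffling one column would change the per-pair differences (and hence the statistic), which is exactly why the pairing must be preserved.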

#80629

rams
Participant

Agree that the hypergeometric distribution is used in this situation.
But what if you apply it to high-volume production with, say, a lot size of 5000 and above? I had problems using the hypergeometric function in Excel, as it is somewhat limited to a certain factorial value (the hypergeometric uses the factorial function).
My solution? For very high lot sizes, I simply used the binomial distribution, which assumes the probability of a defective is constant (sampling with replacement); or I use the Poisson distribution to approximate the binomial, which is what I usually do.
Not sure if I violated some assumptions or principles.
Rams
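For comparison, a sketch of the three distributions on a hypothetical lot of 5000 with 1% defectives and a sample of 80. Python's `math.comb` uses exact integer arithmetic, so the factorial-overflow issue described above does not arise here:

```python
import math

def hypergeom_pmf(k, N, K, n):
    """P(k defectives in a sample of n from a lot of N with K defectives)."""
    return math.comb(K, k) * math.comb(N - K, n - k) / math.comb(N, n)

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

N, K, n, k = 5000, 50, 80, 1      # lot 5000, 1% defective, sample 80
p = K / N
print(hypergeom_pmf(k, N, K, n))  # exact (sampling without replacement)
print(binom_pmf(k, n, p))         # sampling-with-replacement approximation
print(poisson_pmf(k, n * p))      # Poisson approximation to the binomial
```

With the lot this much larger than the sample, all three probabilities agree closely, which supports the substitution.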

#80455

rams
Participant

Ross,
Thank you so much. Let me know once you have the details.
Rick,
Thanks for the info. I will purchase the book ASAP.
Rams

#80419

rams
Participant

Mike/Rick,
Thanks a lot.
I am interested in the short-run SPC techniques. I would like to know how you would use the techniques on one-sided parameters (meaning I can't plot "deviation from target," since the characteristic is higher-the-better).
By the way, the characteristic is ball shear strength on semicon devices.
thanks
Rams

#79987

rams
Participant

I assumed that, having only the Cpk, you also have a short-term sigma, the overall average, and your spec limits. With these data, you can do the following to estimate your Ppk:
1. Compute the total probability of out-of-spec measurements (assuming normality of the data) on both the LSL and USL sides.
2. Calculate the worst-case estimate of the z-score by assuming all defects are in one tail. This estimate is your "z short-term."
3. Estimate the "z long-term" by subtracting 1.5.
4. Since z long-term = 3 * Ppk, Ppk = z long-term / 3. This is what we call an "educated guess," since you don't have an estimate of your long-term sigma.
Ref : BMG
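The steps above can be sketched with Python's standard-library normal distribution, using a hypothetical process (mean 10, short-term sigma 1, specs 7 to 13, i.e. Cpk = 1.0):

```python
from statistics import NormalDist

def estimate_ppk(mean, sigma_st, lsl, usl):
    nd = NormalDist()
    # 1. Total out-of-spec probability, assuming normal data.
    z_l = (lsl - mean) / sigma_st
    z_u = (usl - mean) / sigma_st
    p_total = nd.cdf(z_l) + (1 - nd.cdf(z_u))
    # 2. Worst case: put all defects in one tail -> z short-term.
    z_st = nd.inv_cdf(1 - p_total)
    # 3. z long-term via the conventional 1.5-sigma shift.
    z_lt = z_st - 1.5
    # 4. Ppk = z long-term / 3.
    return z_lt / 3

# Hypothetical process: mean 10, short-term sigma 1, specs 7 to 13.
print(round(estimate_ppk(10, 1, 7, 13), 3))
```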
