iSixSigma

Thaly

Forum Replies Created

#118655

    Thaly
    Member

    Could you send me a copy?
    [email protected]
    Thanks.

    #117245

    Thaly
    Member

I’m afraid your question is a very general one.
If you want to do some forecasting, you could use time series models. Or, if you want to build a model that predicts an outcome based on input variables, there are a number of tools for that: regression is one, neural networks are another, and decision trees could also be built. The model you choose depends on whether your data is continuous, ordinal, categorical, etc. Of course, my answer is very general, but that’s all I could glean from your statement.
Try running searches on data mining. I assume you are a student, so you will have access to some data mining tools. Or, if you think regression is fine, you could use Minitab.
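If you have Python handy, here is a minimal sketch of the regression route (assuming numpy is installed; the x and y values are made up for illustration):

import numpy as np

# Made-up data: x is one input variable, y is the outcome to predict.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit a straight line: y = slope * x + intercept.
slope, intercept = np.polyfit(x, y, 1)

# Predict the outcome for a new input value.
x_new = 6.0
print("predicted outcome:", slope * x_new + intercept)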
    Hope this helps.

    #117239

    Thaly
    Member

I’m not sure if you could get Z from Ppk alone.
Maybe you could try transforming your data to fit a normal distribution and see if it fits better.
Do you know what shape parameter you used? If you can calculate the mean (and then, from Ppk and the spec limits, the sigma), you can calculate your Z value.
Do post it if you come across a better solution.

    #117235

    Thaly
    Member

    Joseph,
You need to know your mean and sigma (and your spec limits) to calculate Z:
Z = (USL - mean) / sigma
Z = (mean - LSL) / sigma
If you know your Ppk and any one of your variables (mean or sigma), then you can calculate the other and use those to calculate Z.
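Here is a rough sketch of those relationships in plain Python (all the numbers are made up for illustration; Ppk here is taken as the smaller Z divided by 3, using the overall sigma):

# Made-up spec limits and process statistics.
usl, lsl = 10.0, 4.0
mean, sigma = 7.0, 0.8

z_usl = (usl - mean) / sigma
z_lsl = (mean - lsl) / sigma
ppk = min(z_usl, z_lsl) / 3.0

# Going the other way: from Ppk and the mean, back out sigma.
sigma_from_ppk = min(usl - mean, mean - lsl) / (3.0 * ppk)
print(z_usl, z_lsl, ppk, sigma_from_ppk)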
    Hope this helps.

    #116770

    Thaly
    Member

Are you convinced that any kind of PMs don’t work for you? I don’t believe PFMEA is the right way to approach your problem, and I work in a very similar type of company to yours. There are no documented benefits of using APQP as far as I know. I like to think of APQP as purchasing automobile insurance: if you have an expensive vehicle and you are an inexperienced driver, you’d purchase expensive, comprehensive insurance. If you are an experienced driver and your vehicle is an ’85 Honda, the state minimum shall suffice.
1. PFMEA is supposed to be a live document. You will find it very hard to keep it so, especially in your case.
2. PFMEA does very little to improve your process efficiently.
3. It is expensive and requires a lot of focused teamwork.
4. A PFMEA will only be as good as the knowledge of the most experienced team member.
5. The only place you should use it is when you know that the part you are producing is highly critical and you can justify the cost of creating an FMEA and keeping it alive.

    #115225

    Thaly
    Member

    Marzuki,
I mistook your mention of a gage master for GR&R. What you say about masters is right. Yes, your master sample must be fixed, and you should know the dimensions of the master (or at least the ones you are interested in), so that you can offset your gage by the specified value every time. (If you are using a transducer-equipped digital gage, you’d calibrate for the resolution first and then set the offset.) Masters do not help to improve your GR&R; all they do is tell your gage where the reference point is. But if you are interested in knowing the accuracy of the gage, you’d have to do an R&R study.
    Hope this helps.

    #115219

    Thaly
    Member

Your n is 4, and you’d have 4 different control charts, one for each of your PCBs. (This is if you are treating the individual PCBs as separate entities.)
You do not have to compute the Cpk each time, because you are already using a control chart, and if the points stay within the limits, you already know your process is in control.
I think you’d probably be better off getting yourself a good primer on GR&R before attempting an actual study. We do need random samples to do a GR&R; there is no such thing as “fixed samples,” and human reproducibility is an entirely different topic!
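For what it’s worth, here is a small Python sketch of the X-bar/R control limits for subgroups of n = 4 (assuming numpy; the measurements are made up, and A2, D3, D4 come from the standard control chart constant tables):

import numpy as np

A2, D3, D4 = 0.729, 0.0, 2.282  # constants for subgroup size n = 4

# Made-up data: each row is one subgroup of 4 measurements.
subgroups = np.array([
    [5.1, 5.0, 4.9, 5.2],
    [5.0, 5.3, 5.1, 4.8],
    [4.9, 5.0, 5.2, 5.1],
])

xbar = subgroups.mean(axis=1)                      # subgroup means
r = subgroups.max(axis=1) - subgroups.min(axis=1)  # subgroup ranges
xbar_bar, r_bar = xbar.mean(), r.mean()

print("X-bar chart limits:", xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar)
print("R chart limits:", D3 * r_bar, D4 * r_bar)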

    #114680

    Thaly
    Member

The Six Sigma logic won’t apply to your scenario unless you know that the deaths are distributed normally (or at least close to it). Usually such discrete events resemble a Poisson distribution (or another discrete distribution) more closely.
If you have the right tools, you could build a neural network model (or a regression model) of your deaths/plane crashes and predict your outcome.
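As a quick illustration of the Poisson idea (assuming scipy is available; the rate of 2.3 events per period is made up):

from scipy import stats

lam = 2.3  # hypothetical average number of events per period

# Probability of exactly k events in one period under a Poisson model.
for k in range(6):
    print(k, stats.poisson.pmf(k, lam))

# Probability of 5 or more events in one period.
print("P(X >= 5):", 1 - stats.poisson.cdf(4, lam))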
    Thaly

    #114293

    Thaly
    Member

All of the answers you got are good. Here is a little mathematical explanation.
You calculate ndc as follows:
ndc = (5.15 * Sigma-process) / (2.575 * 1.414 * Sigma-meas)
The numerator is fairly easy to understand. The denominator is basically the discriminatory power of the gage. Let’s say you make two measurements, X1 and X2. The absolute difference between X1 and X2 must be greater than the denominator for the gage to distinguish the two parts. But the denominator as such is useless (of course, assuming the resulting value is > 1), since we don’t know how bad the gage is until we relate it to the process variation. Hence the ndc concept.
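A plain-Python sketch of that calculation (the sigma values are made up for illustration):

sigma_process = 0.50  # part-to-part (process) standard deviation
sigma_meas = 0.10     # measurement-system standard deviation

ndc = (5.15 * sigma_process) / (2.575 * 1.414 * sigma_meas)
# The constants collapse to roughly 1.41, so equivalently:
ndc_simple = 1.41 * sigma_process / sigma_meas
print(int(ndc), int(ndc_simple))  # ndc is usually truncated to an integer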

    #113655

    Thaly
    Member

I understand you are doing 100% inspection.
One way to improve your gage R&R is to look at the control limits of the R&R experiment and attack the out-of-control points caused by operator error (reproducibility). See if you can get all the points within the control limits; this brings down the operator-error part of your R&R. If you can’t, simply discard those values and recalculate the range.
To answer your question:
If you don’t want to do the above, here’s something you might like to do.
A %R&R of 19.6 means that your gaging system contributes 19.6% of the sigma (total variation), NOT of your tolerance limits.
So you might be better off shrinking your tolerance limits by 19.6% of the sigma, not 19.6% of the tolerance limit itself.
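Here is a small sketch of that guard-banding idea in plain Python (all numbers are made up for illustration):

usl, lsl = 10.0, 4.0  # original tolerance limits
sigma_total = 0.60    # total observed standard deviation
grr_frac = 0.196      # the 19.6% R&R, as a fraction of sigma

guard = grr_frac * sigma_total  # 19.6% of sigma, not of the tolerance
print("adjusted limits:", lsl + guard, usl - guard)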
    Hope this helps.

    #113507

    Thaly
    Member

I agree with juggler. I’d recheck my numbers if I had a situation like that. The most likely explanation is that the noise was very high during the experiment. Try doing more replicates or separating the factor levels further.
But to answer your question: you’d have to include all the non-significant main effects that contribute to the interaction effect (A, B, and C for ABC).
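For instance, in Python with statsmodels (assuming pandas and statsmodels are installed; the 2^3 design data below are made up), keeping the main effects in the model alongside the ABC interaction looks like this:

import pandas as pd
import statsmodels.formula.api as smf

# Made-up 2^3 factorial data, factors coded -1/+1.
df = pd.DataFrame({
    "A": [-1, 1, -1, 1, -1, 1, -1, 1],
    "B": [-1, -1, 1, 1, -1, -1, 1, 1],
    "C": [-1, -1, -1, -1, 1, 1, 1, 1],
    "y": [3.1, 3.0, 2.9, 3.2, 3.0, 3.1, 7.8, 8.1],
})

# Include A, B, and C even if non-significant, because A:B:C is in the model.
model = smf.ols("y ~ A + B + C + A:B:C", data=df).fit()
print(model.params)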

    #113506

    Thaly
    Member

    Ripley,
    Number of calls is discrete.
    Delivery time is continuous. Period.
As others have said, you could try to fit your data to a particular distribution and characterize it that way.
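For example, a quick fit of continuous delivery times with scipy (the times below are made up; the lognormal is just one common candidate for time data):

import numpy as np
from scipy import stats

times = np.array([2.1, 3.4, 2.8, 5.0, 3.9, 4.2, 2.5, 3.1, 6.3, 3.7])

# Fit a lognormal and check the fit with a Kolmogorov-Smirnov test.
shape, loc, scale = stats.lognorm.fit(times)
ks_stat, p_value = stats.kstest(times, "lognorm", args=(shape, loc, scale))
print(p_value)  # a large p-value means no evidence against the fit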
    Thaly

    #112019

    Thaly
    Member

    Sunil,
Unless you want to include the suppliers of the six varieties of inserts as a factor, it is not a nested ANOVA.
You could just do a general full factorial design. One thing to note: your experiment looks like it could suffer if you don’t randomize the runs, since running that many parts might amplify a lurking variable and influence the later part of the experiment. You’d be the best person to decide whether you need to randomize the runs.
It is best to replicate the experiment in a different order and compare the results. If there is not much difference, you could pool the block SS into the error term, which makes your significance tests more sensitive.
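If it helps, here is a minimal Python sketch of building and randomizing a full factorial run order (the factor names and levels are hypothetical; only the six insert types come from your description):

import itertools
import random

factors = {
    "insert_type": ["I1", "I2", "I3", "I4", "I5", "I6"],
    "speed": ["low", "high"],
    "feed": ["low", "high"],
}

# Full factorial: every combination of every level.
runs = list(itertools.product(*factors.values()))

# Randomize the run order to guard against lurking variables.
random.shuffle(runs)
for i, run in enumerate(runs, 1):
    print(i, dict(zip(factors.keys(), run)))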
    Hope this helps

    #111968

    Thaly
    Member

Actually, you can’t do that anymore in the latest version, Minitab 14. It renders itself unusable if you even try to change the date at any point during the 30-day trial period.
    Thaly
