iSixSigma

dangwal

Forum Replies Created

Viewing 31 posts - 1 through 31 (of 31 total)
  • Author
    Posts
  • #182409

    dangwal
    Participant

    We had a similar issue with our semi-tech process.
    We did an NVA (non-value-added) analysis and were able to reduce
    AHT by 20%.

    0
    #177043

    dangwal
    Participant

    Dear Ayesha,
    I came across an organization that also helps you prepare for the Black Belt from ASQ through their study material. This is not a promotion; their e-mail id is: [email protected]
    Their study material will give you immense knowledge of Six Sigma, from the ground level upward.
    Trust that your drive to become an SSBB will be well rewarded.
     
    Nitin

    0
    #59442

    dangwal
    Participant

    Hi Adam
    Please send the card game simulation to me also.
    [email protected]
    Thanks
    Nitin

    0
    #64943

    dangwal
    Participant

    Brandon,
    Thanks for the support. You can send the format / template to
    [email protected].
    Thank you !!

    0
    #64937

    dangwal
    Participant

    Guys, I would appreciate it if you could send me the 8D report format.
    Thank you !!

    0
    #165201

    dangwal
    Participant

    Pls send the video to [email protected].
    Thanks in advance.

    0
    #159866

    dangwal
    Participant

    You can do regression with the predictors as months, but one of the assumptions of regression analysis is that the errors are statistically independent. You will violate this assumption if there is autocorrelation. [Refer to the article in Quality Engineering, Volume 18, Number 3, 2006, page 405.]
    If you do regression, you have to do residual analysis, including a test for autocorrelation; it will flag problems.
    Better, you can set up the regression as an autoregressive model, predicting Y(t) as a linear function of [X(t-1), Y(t-1), Y(t-2), ..., Y(t-n)]. But watch out for a high Variance Inflation Factor (VIF). The best thing to do is to use only uncorrelated predictors. A time series analysis may be the first thing you want to do.
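    The residual check above can be sketched as follows (a minimal illustration with synthetic monthly data; the dataset and every number in it are hypothetical):

```python
import numpy as np

# Hypothetical monthly metric: 36 months with a linear trend plus noise
rng = np.random.default_rng(0)
n = 36
x = np.arange(n, dtype=float)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

# Ordinary least squares fit of y on x (with an intercept)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Durbin-Watson statistic on the residuals: values near 2 suggest no
# first-order autocorrelation; values near 0 or 4 flag a violation of
# the independence assumption
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(round(dw, 2))
```

    If the statistic is far from 2, move to the autoregressive setup rather than plain regression.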
    -Nitin

    0
    #159768

    dangwal
    Participant

    Benchmark Sixsigma
    http://www.benchmarksixsigma.com

    0
    #158108

    dangwal
    Participant

    You may use multiple regression, or, if you are more comfortable with stats, stepwise regression.

    0
    #157645

    dangwal
    Participant

    The process you have given is perfect.
    While calibrating, the bias (accuracy) needs to be determined at multiple points across the operating range of the instrument. If the bias is not statistically zero at all points, linearity calculations are done.
    If the instrument is used for measurement in the as-calibrated condition (e.g. a vernier calliper), an Xbar-R chart of the last few calibrations can be used to check stability over time. If the measurement depends on both the instrument and an external utility (e.g. air gauge or dial-master combinations), the setup needs to be checked more regularly for bias. The plot of this bias over time, as before, can be used to determine stability.
    One of the good uses of stability analysis is to determine the resetting and calibration frequency.
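    A minimal sketch of that Xbar-R stability check (all bias readings below are hypothetical; the constants come from the standard Shewhart table for subgroup size 3):

```python
import statistics

# Hypothetical: bias measured at each of four calibrations, 3 readings each
subgroups = [
    [0.01, -0.02, 0.00],
    [0.02, 0.01, -0.01],
    [0.00, 0.03, 0.01],
    [-0.01, 0.00, 0.02],
]
xbars = [statistics.mean(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
xbarbar = statistics.mean(xbars)
rbar = statistics.mean(ranges)

# Shewhart chart constants for subgroup size n = 3
A2, D3, D4 = 1.023, 0.0, 2.574
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar
print(ucl_x, lcl_x, ucl_r, lcl_r)
```

    Points outside these limits over successive calibrations indicate the bias is not stable, which argues for a shorter calibration interval.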
    Hope this helps.
    Regards,
    Nitin

    0
    #157410

    dangwal
    Participant

    Like for most initiatives, metrics include:
    No. of projects completed in a year,
    % of people involved in projects,
    % of people trained etc.
    Regards,
    Nitin

    0
    #138486

    dangwal
    Participant

    Pradeep –
    A 95% Confidence Interval corresponds to 1.96 sigma on each side of the mean (refer to the Z table).
    This 1.96 is at times rounded to 2 for ease of calculation.
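    The value can be verified with Python's standard library alone:

```python
from statistics import NormalDist

# z-value leaving 2.5% in each tail of the standard normal (95% two-sided)
z = NormalDist().inv_cdf(0.975)
print(round(z, 2))  # 1.96
```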

    0
    #138315

    dangwal
    Participant

    Shiva –
    Looking at the Hypothesis that you want to test
    “if there was a difference between the measurements taken using the different callipers” ………
    You need to do a paired t-test.
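    A minimal sketch of the paired t-test, assuming SciPy is available; the calliper readings below are made up for illustration:

```python
from scipy import stats

# Hypothetical: the same 8 parts measured once with each calliper
calliper_a = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03, 10.00, 9.99]
calliper_b = [10.04, 10.00, 10.06, 10.03, 9.99, 10.05, 10.02, 10.01]

# Paired t-test: H0 is that the mean difference between callipers is zero
t_stat, p_value = stats.ttest_rel(calliper_a, calliper_b)
print(t_stat, p_value)
```

    A small p-value (say below 0.05) rejects H0 and says the two callipers differ.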

    0
    #138239

    dangwal
    Participant

    Shiva – p and q are the probabilities of success and failure respectively in Bernoulli trials.
    These are known beforehand, so the question of when we must use p = 0.5 and q = 0.5, and when we must not, is not correctly phrased.

    0
    #138227

    dangwal
    Participant

    1. The process is reasonably centered (if you have a two-sided spec).
    2. You have a minimum 1s margin / buffer between the process spread (3s) and the nearest spec limit.
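    Those two conditions amount to Cpk >= 1.33, i.e. the nearest spec limit is at least 4s from the mean. A minimal sketch with hypothetical numbers:

```python
# Hypothetical centered process: mean 50, sigma 1, specs 45 to 55
mean, sigma = 50.0, 1.0
lsl, usl = 45.0, 55.0

# Cpk: distance from the mean to the nearest spec limit, in units of 3 sigma
cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)
print(round(cpk, 2))  # 1.67 -> the 1 sigma buffer condition is met
```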

    0
    #137964

    dangwal
    Participant

    rajesh – what is UDTI?

    0
    #137960

    dangwal
    Participant

    Guys, can we have a capability analysis using the median and standard deviation, as we have for the mean and standard deviation?

    0
    #137956

    dangwal
    Participant

    Brit, it is agreed that the median represents non-normal data better, but aren’t we losing focus on the standard deviation? For example, in cycle-time projects I always find 20% of my data causing the non-normality. If I address this with the median I can show the improvements, but that 20% of my data will still be causing a pinch to the customer, as I am losing focus on the standard deviation.
    I am just wondering whether there is any way of combining the median and the standard deviation to measure the baseline that is a good representation of the process.

    0
    #137807

    dangwal
    Participant

    David – Do a thorough study (3 operators, 10 parts, 3 readings per part per operator) if it is feasible in your set-up.
    If there are no issues with reproducibility (AV), it will reflect so in the analysis. However, give it a chance: as HK rightly says, the “foolproofing jig” may pertain only to the correct orientation of the measurement piece.
    Be cautious with part selection, though.

    0
    #137762

    dangwal
    Participant

    David –
    My understanding of the situation described by you is that you need to go for Gage R&R for continuous data (I am presuming that the measurement output from the machine is continuous).
    As operator intervention seems to be limited in this case, reproducibility (AV) will be negligible. The only source of variation in the measurement system would be repeatability (EV).
    A couple of issues to resolve before you do R&R:
    1. Validate the gauge from the bias, linearity and stability perspectives.
    2. Part selection for the R&R study is one of the most critical requirements, as it can influence the final outcome to a large extent.
    Regards
    Nitin
     

    0
    #137705

    dangwal
    Participant

    Michael – in all probability this seems to be an examination question……..
    You can use the binomial distribution in Minitab's Calc menu to find the cumulative probability for <2 (0 & 1) defects. The number of trials is 40 and the probability of success is 0.02.
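    The same cumulative probability can also be computed without Minitab, using only Python's standard library:

```python
from math import comb

# P(X < 2) = P(0) + P(1) for X ~ Binomial(n = 40, p = 0.02)
n, p = 40, 0.02
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2))
print(round(prob, 4))  # ~0.8095
```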
     

    0
    #137703

    dangwal
    Participant

    Pradeep – my answer must have lacked clarity.
    Actually, any process is created to cater to one or more specific needs of its customer(s). Effectiveness is a measure of the extent to which the process is able to cater to these needs.
    Processes do not exist without customers, hence we cannot say “what matters is how efficiently or effectively we have delivered” unless we know to whom the process is delivering.
    I think GE called this the outside-in approach. COPIS/SIPOC is some more jargon on this concept.

    0
    #137701

    dangwal
    Participant

    Pradeep
    An increase in the number of sales is not a metric; it is an improvement target on the metric “number of sales”.
    “Number of sales” for a constant number of sales resources committed to the process is an efficiency metric.
    An effectiveness metric would need identification of the customers (internal and external) of your process. For example, if you consider the fulfillment department a customer of the sales process, then the extent to which their requirements (again needing identification and definition) are achieved by the sales process can be an effectiveness metric.
    The four measurements you listed appear to be effectiveness metrics; however, you need to identify your customer(s) here, e.g. who gets adversely impacted if the missing-contracts % increases.
    nitin

    0
    #137698

    dangwal
    Participant

    Pradeep – It is generally necessary for a process to have both efficiency and effectiveness metrics.
    Efficiency metrics are from the cost / resource optimization perspective (output / input), while effectiveness metrics check how effective the process was in meeting the customer (internal, external) requirements.
    This approach usually helps in striking a balance between the cost of the process and the extent to which customer requirements are achieved.
    Hope this helps, or you may take the example of a specific process and we can discuss it further.
    Nitin

    0
    #137484

    dangwal
    Participant

    SM – Productivity is too large a project theme (a Big Y). I am sure there are multiple (and readily identifiable) factors that increase or decrease productivity.
    I would suggest that you initiate multiple projects on these individual factors (small y’s). This will allow your team to focus on each ‘y’ in a more realistic fashion.
    Just my opinion
    Nitin

    0
    #137482

    dangwal
    Participant

    Arjun –
    There are a lot more X’s that your team can identify, like “lead quality”, “sales pitch used”, “dialer settings” etc.
    However, nothing drives the conversion ratio better than a properly designed and executed “sales incentive” plan.
    Nitin

    0
    #118635

    dangwal
    Participant

    Pl. send me the file of your SS study. -Nitin

    0
    #88032

    dangwal
    Participant

    Dear Lewis,
    From what I see, the simplicity of the Shainin tools makes them not very palatable to statisticians. However, these tools are fairly simple and very effective. I have myself used them to sort out plenty of problems, including problems that were up to 7 years old and that the companies had decided to live with. There have been cases where the Shainin methods (SM) were not the only technique used, but I prefer to try SM first. Only when it fails to narrow down the suspects beyond a point do I use more statistical tools.
    When you get the manual from Sander, I would suggest you first try the KT tools to narrow down the definition, and then use SM to trim it further. In case you have more than one prime suspect, use further classical tools; else decide the action and judge its effectiveness by B vs C.
    I will not be able to say more in general. Do let me know if there is a specific query.
    regards,
    nitin

    0
    #87859

    dangwal
    Participant

    I use
    Statistical Thinking: Improving Business Performance by Roger Hoerl and Ronald D. Snee.
    I think it's a good book.

    0
    #83882

    dangwal
    Participant

    Dear Rao,
    From your letter, I could sense the depth of your knowledge about Six Sigma. Could you please send me your email address? I want to contact you.
    Thanks
    Nitin

    0
    #75152

    dangwal
    Participant

    Rakesh
    I sense that you feel constrained in deploying SPC.
    You do not need to know the historical standard deviation. All you need to know is what kind of changes you are trying to detect in your process average. An OC curve is what will be useful for you.
    You need to deploy group control charts in your situation.
    Nitin
     

    0