iSixSigma

Sridhar

Forum Replies Created

Viewing 85 posts - 1 through 85 (of 85 total)
  • Author
    Posts
  • #189531

    Sridhar
    Member

    Akbar,
    No problem, but it is better to know the process areas. Six Sigma is a vehicle you can use to address the QPM, TS, CAR, etc. process areas in CMMI. Ours is a CMMI L5 certified company and we use Six Sigma to address these issues. You can also contact me at [email protected].
    Sridhar

    0
    #189487

    Sridhar
    Member

    Hi,
    First look at the metrics you are already capturing for your management reviews. In principle, any point going outside your baseline limits mandates raising a CAR to bring the process back in control, unless it can be attributed to special causes. These out-of-control points are probable candidates for a Six Sigma project.
    G.N.Sridhar

    0
    #181071

    Sridhar
    Member

    Hi,
    I will also be very glad to receive the material. Can you send a copy to [email protected], please?
    Thanks
    Sridhar
    (from India)

    0
    #171980

    Sridhar
    Member

    Hi BC,
    Can you also forward me the ppt, if that is OK with you?
    Thanks
    Sridhar

    0
    #64902

    Sridhar
    Member

    Brandon,
    I would appreciate it if you could also e-mail me the 8D samples; my e-mail: [email protected]
    Sridhar

    0
    #162509

    Sridhar
    Member

    I too am searching for tips on this subject. In our painting line we also face this quality problem: sheet metal component paint thickness (electrostatic method) varies from the set specification. How can we benchmark? Does anyone have an example or a data collection method? Regards,

    0
    #160776

    Sridhar
    Member

    Thanks for your comments. Regarding the problem, please note that more or less paint on the sheet metal component does not directly affect function or performance, but excess paint may increase the cost.
    Please give me a road map for doing a DOE.
    Regards,
    Sridhar

    0
    #160726

    Sridhar
    Member

    Thanks for the clarifications and views, Mr. Annon and Mr. Shan. To give a bit more information: we have a painting process (electrostatic powder painting, semi-automatic line, manual touch-up and auto spray powder) for sheet metal components, and the specification is a coat thickness of minimum 70 microns and maximum 110 microns. We monitor this specification by random inspection during the day while the process runs. On some days the coat thickness drifts up or down, and feedback instructions are given to the powder sprayer to bring it back under control.
    Now I would like to know: is it necessary to check the process capability for this process? Is it worth spending time on, and if so, how can I proceed? As you know, in this process the paint thickness varies across the sheet, and we cannot maintain, say, exactly 70 microns throughout the area.
    In that case, how can sampling / inspection data be collected for calculating the process capability? For your information, we run many variants of components within a single batch (more than 50 items with different profiles).
    Please give your views. If anyone has a similar sample Excel data sheet, please let me have it.
    Appreciate your feedback.
    Regards,
    Sridhar 
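    As a minimal sketch (not from the thread; Python with made-up thickness readings, treating one component variant as its own subgroup), capability against the 70-110 micron spec could be estimated like this:

    import numpy as np

    LSL, USL = 70.0, 110.0                      # spec limits in microns
    thickness = np.array([78, 85, 91, 74, 88,   # hypothetical readings for
                          95, 82, 79, 90, 86],  # one component variant
                         dtype=float)

    mean, sigma = thickness.mean(), thickness.std(ddof=1)
    Cp  = (USL - LSL) / (6 * sigma)
    Cpk = min((USL - mean) / (3 * sigma), (mean - LSL) / (3 * sigma))
    print(f"mean={mean:.1f}  sigma={sigma:.2f}  Cp={Cp:.2f}  Cpk={Cpk:.2f}")

    With 50-plus variants, one option is to run this per variant rather than pooling, since mixing profiles inflates the apparent variation.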

    0
    #64345

    Sridhar
    Member

    Hi Joe Musallam,
    Can you also mail me your report on the hypothesis tests? It would be very useful.
    Sorry… you can send the file to [email protected]
    Thank you in advance.
    Sridhar

    0
    #64344

    Sridhar
    Member

    Hi Joe,
    Can you also mail me your report on the hypothesis tests? It would be very useful.
    Thank you in advance.
    Sridhar

    0
    #116909

    Sridhar
    Member

    Hi, Ken,
    I am a novice to six sigma but have been using control charts for quite some time. Let me attempt to understand your query.
    The very name "control chart" implies it is for a process that is already in control. Given a specified tolerance from the customer (specification limits, SL), we can set an internal limit for our process parameter that is more stringent than the SL. Control limits (CL) are generated from the natural variation in the process whose output we track using the control charts. Tracking and comparing present performance using CLs generated from a standard or initial sample does not account for the present variation in the process.
    Any thoughts?
    -Sridhar
     

    0
    #94025

    Sridhar
    Member

    Hi Ravi:
    A slightly different twist: both your definitions are acceptable; it all depends on the level at which you are doing the project. At a system level, a defect would be a failed product at the customer. When you do a CTQ flowdown, you could find that for a subsystem-level project, line defects are the defects you want to fix.
    I hope this helps
    Cheers
    Sridhar

    0
    #88320

    Sridhar
    Member

    Hi Mridul,
    Thanks for the reply. You seem to have realized the benefit of using Six Sigma tools for managing ERP projects.
    KavSS, I, and others would be very interested to hear from you about any operational-level case study or notes from experience.
    Thanks and Regards
    Sridhar
     
     
     

    0
    #88094

    Sridhar
    Member

    Thanks, Michelle, for your feedback.
    Regards
    Sridhar
     

    0
    #62986

    Sridhar
    Member

    Have you thought about signing up for formal training in Six Sigma? I think it is almost impossible to do Six Sigma correctly without formal training. It can be done (just as you could work successfully as an engineer without a bachelor's degree), but it is less likely.

    0
    #83844

    Sridhar
    Member

    I tend to agree with the original post in this thread. The problem lies in the variation associated with ASQ (no pun intended). Some chapters are excellent, while others flounder. Some articles in Quality Progress are helpful, while others are a waste of time. The article highlighted in this thread should not have been published (I totally agree), although others are very useful.
    How can ASQ become an organization with higher standards? I feel funny even asking this type of question, but where else can we discuss it? I don't believe ASQ has ever asked me for my opinion, and yes, I am a member. What continuous improvement standards does ASQ follow in the performance of their daily duties and processes?
    S.V.

    0
    #77266

    Sridhar
    Member

    Hi Gabriel,
    How many data points are needed in this case?

    Percentile        85      90      95      98      99
    Actual (hr/pg)    0.61    0.67    0.90    1.10    1.15
    Fitted (hr/pg)    0.60    0.70    1.40    2.00    3.00

    The table above gives actual and fitted (lognormal) values. Looking at it, we chose the 85th percentile as our UCL. What do you feel the UCL should be?
    sridhar
     

    0
    #77245

    Sridhar
    Member

    Hi Gabriel,
    Thanks for the reply. The values I posted last time are from the fitted distribution. Also, before fitting the distribution I removed all the outliers using box plots. I used 72 data points. I checked for random patterns as well; since these values come from different projects, you obviously won't find any pattern. I hope I have answered all the points you asked about.
    thanks
    sridhar
     

    0
    #77231

    Sridhar
    Member

    Hi Gabriel
    Once again, thanks for the information. But I need one more clarification: don't you think we also need to look at the actual values with this kind of distribution? Let me explain why. As my distribution is very skewed toward the tail, I am getting the following values:
    For the 85th percentile, the value is 0.6 hours per page.
    For the 90th percentile, as the frequency is very low toward the right, the value is 0.7 hours per page.
    Similarly, the 95th percentile gives 1.4 hr/page, the 98th percentile 2.0 hr/page, and the 99th 3.0 hr/page.
    Do these actual hr/page values need to be considered while setting the control limits, especially in the non-normal case?
    thanks
    sridhar
     

    0
    #77230

    Sridhar
    Member

    Hi ted,
    The control chart on preparation time doesn't say you have to inspect within the bands. Yes, you are right that we have to keep control of the defects. One reason for finding fewer or more defects is the time spent on inspection. There may be many other reasons too: the inspectors may not have enough experience, the number of people inspecting, the author of the document being new, the number of pages, the complexity of the document, etc. So what we are trying to say is that if anything goes out of control, look for the root cause and take corrective action. The root cause may be any of the reasons above.
    thanks
     

    0
    #77209

    Sridhar
    Member

    Hi Gabriel,
    Thanks for the reply; this is exactly the kind of reply I was looking for. But regarding the control limits: my data follows a lognormal distribution that is very skewed to the right. Why do we have to go for the 99.8th percentile? I think that leaves very little scope for detecting out-of-control points. What do you feel?
     

    0
    #77202

    Sridhar
    Member

    Hi pooja,
    This morning I received a mail from QAI saying they are going to conduct a two-day workshop on Understanding Six Sigma. The schedule is as follows:
    Hyderabad: 24th and 25th July 2002; Chennai: Aug 27 and 28; Pune: Aug 8 and 9.
    You can find more details at their website.
    http://www.qaiindia.com
    regards
    sridhar
     

    0
    #77200

    Sridhar
    Member

    Hi bob,
    Thanks for the reply. I fully agree with everything you mentioned, but let me explain the problem once again. I work in a software company where we want to introduce control charts for the document inspection process. For each project I get only one document, and it is inspected once. We collected data for all documents inspected over the last year and found the distribution: the data follows a lognormal for a particular characteristic. Now, how do I calculate the control limits? Do I need to transform the data to normal first? I won't have any subgroups, because all software projects differ in one sense or another. What are my options, and which is better?
    1. Transform to normal and then calculate the control limits.
    2. Since I know the data follows a lognormal, fit the distribution and set the control limits at some percentiles based on the fitted and actual values.
    I am sorry to repeat myself, but it is very important for me to know which one to go for.
    thanks
    sridhar
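    As a minimal Python sketch of option 2 (hypothetical hours-per-page values, with scipy's lognormal fit as one way to do the fitting):

    import numpy as np
    from scipy import stats

    hours_per_page = np.array([0.20, 0.35, 0.30, 0.50, 0.45, 0.60,
                               0.25, 0.40, 0.55, 0.70, 0.90, 0.33])

    shape, loc, scale = stats.lognorm.fit(hours_per_page, floc=0)

    for pct in (85, 90, 95, 99):
        fitted = stats.lognorm.ppf(pct / 100, shape, loc, scale)
        actual = np.percentile(hours_per_page, pct)
        print(f"{pct}th percentile: fitted={fitted:.2f}  actual={actual:.2f}")

    The candidate UCL is then whichever fitted percentile you settle on.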
     

    0
    #77190

    Sridhar
    Member

    Hi CT
    Thanks for the reply. Yes, I need control charts; let me explain why. We inspect a document, and the number of defects found definitely depends on the time spent. Suppose our time spent goes out of control while the defect density of the document stays within control: we can then find the root cause so that the same cause does not recur in the next project. I hope I explained it properly.
     
    thanks
    sridhar
     

    0
    #77187

    Sridhar
    Member

    Hi mike,
    I know about the central limit theorem, but my problem is somewhat different. Let me give some more information. I am trying to develop a control chart for the effort spent per page of a document during inspection. I have only one data point per document, so I cannot take a sample of more than one point, average it, and rely on the central limit theorem.
    Can you say how to handle this situation? Also, I checked in Minitab that the data follows a lognormal distribution.
    thanks
    sridhar
     

    0
    #77186

    Sridhar
    Member

    Hi mike,
    I know about the central limit theorem, but my problem is somewhat different. Let me give some more information. I am trying to develop a control chart for the effort spent per page of a document during inspection. I have only one data point per document, so I cannot take a sample of more than one point, average it, and rely on the central limit theorem.
    Can you say how to handle this situation? Also, I checked in Minitab that the data follows a lognormal distribution.
    thanks
    sridhar
     

    0
    #77182

    Sridhar
    Member

    Hi Daren,
    You can compare two samples with different sample sizes as well, but you have to make an assumption about your population standard deviations. If you assume the standard deviations are equal, you can calculate the pooled standard deviation.
    How to do this is explained on this website:
    http://www.itl.nist.gov/div898/handbook/eda/section3/eda353.htm
    thanks
    A.Sridhar
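    As a minimal illustration with hypothetical data, scipy's equal-variance two-sample t-test applies exactly this pooled standard deviation:

    from scipy import stats

    sample_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
    sample_b = [12.6, 12.2, 12.8, 12.5]          # unequal size is fine

    t_stat, p_value = stats.ttest_ind(sample_a, sample_b, equal_var=True)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")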
     

    0
    #77181

    Sridhar
    Member

    Hi,
    If you have more than 30 data points, you can estimate the standard deviation as sqrt(p*(1-p)) (based on the central limit theorem). The rest of the calculations remain the same; I am not sure whether this method of calculating Cp is correct or not.
    thanks
     

    0
    #77138

    Sridhar
    Member

    Hi,
    How many data points do you have? If you have more than 30 points you may go for a t-test, though I am not sure whether that is correct or not.
    Alternatively, you can use nonparametric tests like Anderson-Darling, Mann-Whitney, etc., which do not depend on the kind of distribution.
    thanks
    A.Sridhar
     

    0
    #77131

    Sridhar
    Member

    Hi rullen,
    Sample size can be calculated depending on three parameters:
    1. Confidence level. 2. Margin of error. 3. Variance, in the case of continuous data, or the proportion of failure or acceptance, in the case of attribute data; this can be a guess from past data, and if you don't have any guess you can take it as 0.5 to get the maximum sample size.
    There is a formula for calculating sample size based on the above three factors.
    I can send you a presentation on sampling methods and sample size, plus an Excel template for calculating the sample size; if you are interested, contact me at
    [email protected]
    thanks
    sridhar
     

    0
    #77104

    Sridhar
    Member

    Hi,
    Why don't you try this online handbook? I hope it helps you a lot.
    http://www.itl.nist.gov/div898/handbook/dtoc.htm
    thanks
    A.Sridhar
     

    0
    #76771

    Sridhar
    Member

    Hi nitin,
    I think you have misunderstood the definition of interaction. Interaction occurs between factors, not between levels. By considering all 2x2 combinations we can find out whether there is any interaction between two factors, not between any levels.
    If you need more clarification: [email protected]
    regards
    sridhar

    0
    #76511

    Sridhar
    Member

    Hi,
    I once did a similar kind of survey in our organization. At that time I tested for significant differences between different plots, using ANOVA, and I also tested each pair of buildings independently using a nonparametric test.
    The disadvantage of using ANOVA is its assumption that the data follow a normal distribution, which is difficult here because our scale is limited.
    Using independent nonparametric tests on each pair reduces our overall confidence level.
    So I used both ANOVA and nonparametric tests. I don't know whether this is right or not, but I got good results from the analysis.
    I used Minitab for the above.
    I hope this helps you. If you need any more information, contact me at [email protected]
    thanks
    A.Sridhar
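    As a minimal sketch of the two analyses mentioned (hypothetical survey scores from three plots; note I use Kruskal-Wallis as the nonparametric multi-group test here, whereas the post tested pairs independently):

    from scipy import stats

    plot_a = [4, 5, 3, 4, 4, 5]
    plot_b = [3, 2, 3, 4, 3, 2]
    plot_c = [5, 4, 5, 5, 4, 4]

    f_stat, p_anova = stats.f_oneway(plot_a, plot_b, plot_c)   # ANOVA
    h_stat, p_kw = stats.kruskal(plot_a, plot_b, plot_c)       # nonparametric
    print(f"ANOVA p = {p_anova:.4f}, Kruskal-Wallis p = {p_kw:.4f}")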
     

    0
    #75359

    Sridhar
    Member

    Hi,
    That depends on your alternative hypothesis. If your alternative hypothesis is of the "not equal" kind, you have to use a two-tailed test; if your alternative hypothesis is that one is greater or less than the other, you use a one-tailed test.
    thanks
    A.Sridhar
     

    0
    #75217

    Sridhar
    Member

    Hi,
    One of the best ways of deploying the voice of the customer is through QFD (Quality Function Deployment).
    thanks
     

    0
    #74982

    Sridhar
    Member

    Hi,
    I am not sure whether my answer will help you or not.
    What I feel is: first develop a decision model, e.g., I will reject this bag if it is above the USL or below the LSL. Then develop a sample size based on the attribute type of data. For this you need to assume the following:
    1. Confidence level
    2. Proportion of bad or good (take 0.5 for the maximum sample size)
    3. Margin of error (as a percentage)
    Then calculate the sample size with the above assumed parameters. Once the process matures and you have the standard deviation, you can recalculate your sample size.
    hope this helps you.
    thanks
    A.Sridhar
     

    0
    #74673

    Sridhar
    Member

    Hi Camargo,
    Sample size can be determined based on the following three factors:
    1. Size of the population
    2. Margin of error
    3. Probability of failure or success
    There is a simple formula to calculate the sample size. Send me your mail id and I will send you an Excel sheet that calculates the sample size from the required parameters.
    thanks
    A.Sridhar
     

    0
    #73409

    Sridhar
    Member

    Hi wasim,
    Cpk is calculated as
    min { (USL - Average) / (3*Sigma), (Average - LSL) / (3*Sigma) }
    sridhar
     

    0
    #72746

    Sridhar
    Member

    Hi,
    Why don't you try a CUSUM control chart? It helps find very small shifts that other control charts cannot detect. If you need more information, mail me at
            [email protected]
    thanks
    sridhar
     

    0
    #72581

    Sridhar
    Member

    Hi,
             I have one with me. Mail me at [email protected]
    sridhar
     

    0
    #72236

    Sridhar
    Member

    Hi Erick,
    Thanks for the reply. Can you please give me your e-mail id? I will be in touch with you if I need any clarifications.
    thanks
    sridhar
     

    0
    #72193

    Sridhar
    Member

    Hi Erik,
    Thanks for the reply. I did try this method, but the problem is that most of the cells I got are less than 5, and I don't know what to do. Can you suggest what to do if I have cell counts less than 5?
    Another question: because I have all the combinations, can I apply ANOVA (DOE) to find out which plot and which grade give me optimal satisfaction? What I mean is: if I find the plot and grade with the optimum response, I will provide similar facilities to the other plots and grades as well.
    Can I attempt such an analysis, and if so, what changes do I have to make to the response column, given that my response is of attribute type?
    thanks in advance
    sridhar
     
     

    0
    #72108

    Sridhar
    Member

    Hi,
    I don't think you have to wait for 220 tasks, because you are converting them into defects per million opportunities anyway. Whether it is 220 or fewer doesn't matter; the final metric you report is defects per million opportunities.
    sridhar
     

    0
    #71724

    Sridhar
    Member

    Hi chandra,
        Send me your mail id.
    sridhar
     

    0
    #71723

    Sridhar
    Member

    Hi,
    A lot of discussion has taken place in this forum on sample size. Why don't you search by keyword? Try this link.
    https://www.isixsigma.com/forum/showmessage.asp?messageID=8374
    regards
    sridhar
     

    0
    #71584

    Sridhar
    Member

    Hi,
    The 3.4 comes from the normal distribution curve. Check a standard normal distribution curve at the 4.5 sigma value: multiply the area outside the curve by one million and you get 3.4. You may ask why we take 4.5 instead of 6 sigma.
    The 1.5 sigma is assumed as the long-term shift; that is, the process may shift by 1.5 standard deviations in the long term. That is why we consider 4.5 sigma after allowing for the 1.5 shift. If you take the full 6, you get about 2 parts per billion instead of 3.4 parts per million.
    hope this helps you
    regards
    A.Sridhar
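    As a minimal sketch of that arithmetic in Python:

    from scipy.stats import norm

    print(norm.sf(4.5) * 1e6)      # one-sided tail at 4.5 sigma: ~3.4 DPMO
    print(2 * norm.sf(6.0) * 1e9)  # centered 6 sigma, both tails: ~2 per billion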
     

    0
    #71581

    Sridhar
    Member

    Hi,
    You probably haven't got the right concept of six sigma. Six sigma implies that you can fit 6 standard deviations between your target (center) and the closest specification limit. Five sigma means you can fit only 5 standard deviations between the target and the closest specification limit, which implies your standard deviation is larger, which in turn means more output falls outside the specifications. I hope this gives you some idea.
    regards
    sridhar
     

    0
    #71535

    Sridhar
    Member

    Hi,
    I think I will go with Erick: for this kind of data it is better to use the contingency table approach. I am not sure whether I understood your question correctly, but I am trying to explain what I understood.
    The results are:

                    Large deals    Small deals    Total
    Adam            21 (a)         94 (b)         115
    Joan            27 (c)         58 (d)         85
    Total           48             152            200

    Null hypothesis: P1 = P2
    Alternate: P1 ≠ P2
    The expected values are calculated as
    a = 48*115/200 = 27.6      b = 115*152/200 = 87.4
    c = 48*85/200 = 20.4       d = 152*85/200 = 64.6
    Chi-square Ch(cal) = Sum (observed - expected)^2 / expected = 4.89
    Chi-square Ch(critical for df = 1, alpha = 0.05) = 3.84
    Since Ch(cal) > Ch(critical), we reject the null hypothesis and conclude that the proportions of deals are not the same for Adam and Joan.
    hope this helps you
    regards
    sridhar
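    As a minimal check of the same table in Python (scipy computes the same expected counts; correction=False matches the hand calculation):

    import numpy as np
    from scipy.stats import chi2_contingency

    table = np.array([[21, 94],    # Adam: large, small
                      [27, 58]])   # Joan: large, small

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")   # chi2 ~ 4.89
    print(expected)                # 27.6, 87.4, 20.4, 64.6 as above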
     
     

    0
    #71524

    Sridhar
    Member

    Hi,
    It is very simple, but the one disadvantage is that when you have around 10 things to prioritize you can't compare all 10 at a time. That is what AHP avoids: it compares only pairwise, and later you can even check whether any bias crept into the scores. If so, you redo it until there is no bias. Yes, it also has disadvantages: with more items to compare you need many more pairwise comparisons, which becomes cumbersome, but it is still useful.
    best regards
    sridhar
     

    0
    #71474

    Sridhar
    Member

    Hi Antony,
    Can you send me your mail id? I will send you a paper and an Excel sheet for doing prioritization using AHP.
    sridhar
     

    0
    #71452

    Sridhar
    Member

    Hi,
    You have to apply a transformation to the response data if it is of attribute type, depending on the distribution; in your case it may follow a Poisson distribution. Apply the transformation, then do the same analysis as you would for a continuous response.
    You can look into Minitab for this. Or you can go through this website, which has some questions and answers on DOE.
    http://www.minitab.com/support/answers/index.asp?topic=DOE
    hope this helps you
    sridhar
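    As a minimal sketch of the idea (assuming Poisson-distributed counts, for which a square-root transformation is the usual variance-stabilizing choice; the counts here are hypothetical):

    import numpy as np

    defect_counts = np.array([3, 7, 1, 12, 5, 9, 2, 15])
    response = np.sqrt(defect_counts)   # analyse this as a continuous response
    print(response.round(3))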
     

    0
    #71450

    Sridhar
    Member

    Hi,
    Try AHP (the Analytic Hierarchy Process).
    sridhar
     

    0
    #71424

    Sridhar
    Member

    Hi,
    Sample size calculation depends on the following factors:
    1. Your confidence level
    2. Margin of error
    3. Standard deviation of the population (in the case of proportions it is sqrt(p*q))
    Once you have assumed these values, calculate the sample size as
    n = Z^2 * (std deviation)^2 / (error)^2
    The final decision on sample size depends on the response rate: inflate the obtained sample size according to your expected response rate.
    hope this helps you
    sridhar
     

    0
    #71423

    Sridhar
    Member

    Hi,
    One way of prioritizing is AHP (the Analytic Hierarchy Process). The concept is to compare only two characteristics at a time, i.e., pairwise comparison. All pair combinations are compared and given a score on a scale of 1 to 9. Once all the combinations are done, we standardize the scores and get the most important ones. One more advantage of this method is that you can test whether the scores were given randomly or with bias; if any bias has occurred, you do the whole process again and re-rank.
    Hope this helps you.
    sridhar
    Download: Analytic Hierarchy Process (AHP) Explanation (Microsoft Word)
    Download: Analytic Hierarchy Process (AHP) Template (Microsoft Excel)

    0
    #71360

    Sridhar
    Member

    Hi,
    I think the simplest way of looking for outliers is box plots. Points that fall outside the whiskers of the box plot are called outliers.
    hope this helps you
    sridhar
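    As a minimal sketch of the box-plot rule with hypothetical data (points beyond 1.5 times the IQR from the quartiles fall outside the whiskers):

    import numpy as np

    data = np.array([5.1, 4.8, 5.0, 5.3, 4.9, 5.2, 9.7, 5.0])
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    print(data[(data < lower) | (data > upper)])   # -> [9.7]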
     

    0
    #71326

    Sridhar
    Member

    Hi,
    This is just manipulation of the formulas. Here is the derivation:
    Cpk = min { (USL - Average)/(3*sigma), (Average - LSL)/(3*sigma) }
    Cp = (USL - LSL)/(6*sigma), so sigma = (USL - LSL)/(6*Cp).
    Substituting sigma into the Cpk equation:
    Cpk = 2*Cp/(USL - LSL) * min { USL - Average, Average - LSL }
    With T = (USL + LSL)/2 (the tolerance center):
    USL - Average = (USL - LSL)/2 - (Average - T)
    Average - LSL = (USL - LSL)/2 + (Average - T)
    so min { USL - Average, Average - LSL } = (USL - LSL)/2 - |Average - T|.
    Therefore
    Cpk = Cp * (1 - 2*|Average - T|/(USL - LSL)) = Cp * (1 - k)
    where k = 2*|Average - tolerance center| / tolerance width.
    I hope you can follow this calculation. If not, send me your mail id.
     
    sridhar
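    As a minimal numeric check of the algebra, with assumed values:

    USL, LSL = 110.0, 70.0
    mean, sigma = 95.0, 5.0

    Cp  = (USL - LSL) / (6 * sigma)
    Cpk = min((USL - mean) / (3 * sigma), (mean - LSL) / (3 * sigma))

    T = (USL + LSL) / 2                     # tolerance center
    k = 2 * abs(mean - T) / (USL - LSL)     # off-center ratio
    print(Cpk, Cp * (1 - k))                # both ~1.0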
     

    0
    #71288

    Sridhar
    Member

    Hi,
    The Box-Cox method is used to make non-normal data normal.
    You can go through this handbook:
    http://www.itl.nist.gov/div898/handbook
    It gives some information about the Box-Cox method. After that you can use any statistical package, such as Minitab, to do the transformation.
    thanks
    sridhar
     

    0
    #71269

    Sridhar
    Member

    Hi,
    Why don't you try a transformation such as Box-Cox? Generally this transformation brings the data close to normal. Then calculate your control limits and convert them back to your original scale.
    sridhar
     

    0
    #71268

    Sridhar
    Member

    Hi,
    The simple formula to calculate the sample size is
    n = Z^2 * p*(1-p) / (error)^2      for an infinite or very large population.
    If you have a finite population, make the correction as below:
    Sample size (finite population) = 1 / { (1/n) + (1/N) }
    n = the value calculated by the formula above
    N = population size
    Z = the normal value corresponding to the confidence interval (for example, for a 95% CI, Z = 1.96)
    p = population proportion (assume 0.5 to be on the safe side)
    error = assumed margin of error in % (e.g. 5%)
    Hope this helps you.
    If you need more clarification, please mail me at
          [email protected]
    thanks
    Sridhar
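    As a minimal Python sketch of the formula and correction above (all inputs are assumptions you set yourself):

    from scipy.stats import norm

    def sample_size(confidence=0.95, p=0.5, error=0.05, population=None):
        z = norm.ppf(1 - (1 - confidence) / 2)   # Z = 1.96 for a 95% CI
        n = z**2 * p * (1 - p) / error**2
        if population is not None:               # finite-population correction
            n = 1 / (1 / n + 1 / population)
        return round(n)

    print(sample_size())                  # ~384 for a very large population
    print(sample_size(population=1000))   # ~278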
     

    0
    #71219

    Sridhar
    Member

    Hi,
    You can try QFD (Quality Function Deployment) and FMEA (Failure Mode and Effects Analysis). You can even try TRIZ (the Theory of Inventive Problem Solving). In newer terms, you can also think of Design for Six Sigma (DFSS).
    sridhar

    0
    #71218

    Sridhar
    Member

    Hi,
    Always use Cpk for the six sigma calculation. When your process is on target, Cp and Cpk are the same; Cpk is always less than or equal to Cp.
    thanks
    sridhar
     

    0
    #71073

    Sridhar
    Member

    Hi,
    Send me your e-mail and I will send you two papers.
     
    bye
    sridhar
     

    0
    #70979

    Sridhar
    Member

    Hi ravi,
           Why dont you try this journal
         http://www.triz-journal.com/archives/
    hope this helps you in understanding TRIZ
    bye
    sridhar
     

    0
    #70950

    Sridhar
    Member

    Hi,
    Why don't you try this website? It will give you a very simple and elaborate explanation with examples.
    http://www.itl.nist.gov/div898/handbook
    Hope this may help you for better understanding.
    bye
    sridhar
     

    0
    #70905

    Sridhar
    Member

    Hi Ali,
    There is a website of the International Society of Six Sigma Professionals (http://www.isssp.org). You can go to this website; you will find some Six Sigma affiliates in the resource center. One listed there is:
    House of Quality Business and Management Consultation (Pakistani).
    I don't know whether they conduct any trainings or not, but you may get some more information there.
    I hope this is of interest.
    sridhar

    0
    #70821

    Sridhar
    Member

    Hi ram,
    I sent the case studies to the address you gave, but the mail bounced back. Give me the correct mail id and I will send them again.
    bye
    sridhar

    0
    #70820

    Sridhar
    Member

    Hi,
    We assume a 1.5 sigma shift in the process to include the long-term effect. Every process has a chance of shifting for many reasons: tool wear, change of operators, change in environment, and so on. You can't take up a six sigma project every time a shift occurs, because of the cost and effort involved. That is why the 1.5 sigma shift is assumed.
    hope this helps you
    thanks
    sridhar

    0
    #70800

    Sridhar
    Member

    Hi hoon,
    Cpk and Cp are equal when your process is centered. If your process is off-center, Cp will be higher than Cpk. Cp tells you how capable your process could be; Cpk tells you whether your process has shifted. The paragraph you mention says to keep your process at Cpk = 1.33 so that your process capability is 4 sigma when centered; the extra sigma is allowed to accommodate shift in the process.
    hope this gives you some help.
    thanks
    ———————————————————————-

    0
    #70750

    Sridhar
    Member

    Hi akro,
    I have two papers on FMEA. One is a case study done at a hospital, i.e., how to improve the service in a restaurant within a hospital. The second is a paper explaining FMEA.
    If you are interested, send me your e-mail and I will send you the two papers.
    sridhar
     

    0
    #70720

    Sridhar
    Member

    Hi rajesh,
    Why wouldn't you have a mean? Whatever data points you have, you definitely have a mean. You will not have a lower specification limit.
    You can calculate Cpk as (USL - mean) / (3*Sigma).
    sridhar
    —————————————————————-

    0
    #70674

    Sridhar
    Member

    Hi suzanna,
    I would be very grateful if you sent the report to my mail id:
    [email protected]
     
    thanks
    sridhar

    0
    #70548

    Sridhar
    Member

    Hi stan,
    If the process is centered on the target, then Cp and Cpk are equal, isn't that right?
    ———————————————————————
    sridhar

    0
    #70477

    Sridhar
    Member

    Hi,
    Predictive modelling means predicting some response variable using certain dependent parameters. I am in software, so I will give you an example from the software process.
    Take the inspection process in software. We inspect different artifacts (documents) and prepare a defect log. The number of defects may depend on many factors: size of the artifact, complexity, number of inspectors, experience of the inspectors, effort spent, etc. We fit a model on past data for what the number of defects should be for given parameters. That is the predicted value; when you do the actual inspection you get some other value, and you can then decide whether to go for one more inspection or not.
    hope this helps you.
    thanks
    —————————————————————-
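    As a minimal sketch of the idea with made-up inspection data (a plain least-squares fit standing in for whatever model you would actually choose):

    import numpy as np

    # columns: pages, inspection effort (hours); response: defects found
    X = np.array([[10, 4], [25, 9], [40, 15], [15, 6], [30, 11]], dtype=float)
    y = np.array([3, 8, 14, 5, 10], dtype=float)

    A = np.column_stack([np.ones(len(X)), X])    # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    new_doc = np.array([1, 20, 8])               # intercept, pages, effort
    print(f"predicted defects: {new_doc @ coef:.1f}")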

    0
    #70268

    Sridhar
    Member

    Hi,
    I know that if your process is at six sigma, your Cp is 2. It is calculated as follows.
    Basically, Cp = (USL - LSL) / (6*stddev). When your process is at the six sigma level, you can fit 12 standard deviations between the LSL and USL,
    i.e. Cp = (12*stddev) / (6*stddev) = 2.
    Cpk = (4.5*stddev) / (3*stddev) = 1.5 (taking the 1.5 shift on one side).
    You can extend this idea to other sigma levels as well.
    hope this helps you
    thanks
    ——————————-

    0
    #70232

    Sridhar
    Member

    Hi,
    Please send me your mail id. I have two case studies on prioritizing software requirements using AHP (the Analytic Hierarchy Process), with a good explanation of the process as well. Send me your mail id and I will send you the papers.
    bye
    ———–

    0
    #70231

    Sridhar
    Member

    Hi,
    Normally the process capability should be at least 1.33. If it is more, well and good.
     
    ——————–

    0
    #70230

    Sridhar
    Member

    Hi,
    The coefficient of variation helps you decide whether to use 1 sigma or 3 sigma limits when developing control charts; this is only a recommendation. When your sigma is very high, 3 sigma limits mean almost every point will fall within control because of the large distance between the control limits and the center line. To avoid this, we check the coefficient of variation and decide upon the control limits.
    Hope this gives some idea
    bye

    0
    #70229

    Sridhar
    Member

    Hi,
    The Taguchi loss function says that as you move away from your target value, cost increases. Six sigma similarly says to stay centered on the target with low variation so that you make only 3.4 defects per million. To attain six sigma you can use any method or technique; DOE is one of those techniques.
    bye
     

    0
    #70114

    Sridhar
    Member

    Hi,
    I have not experienced this kind of situation, but I will try to answer within my knowledge. Even if the machines are the same, there may be some variation in the products you are making, perhaps because of operators, machine parameter settings, etc. So what I feel is: do some hypothesis testing on the 12 machines, i.e., test whether the sample means and variances of all 12 machines are the same or not. If they are the same, you can use the same control limits; if not, use different control limits for those machines. I hope this gives you a way to find a solution. sridhar

    0
    #70108

    Sridhar
    Member

    Hi ravi,
    You are right. But what I mean to say is that you should take care that your control limits always lie within the specification limits, to assure that your process output meets the specifications.
     
    sridhar
    ————–

    0
    #70088

    Sridhar
    Member

    Hi
    Specification limits are set by the customers. We set our control limits so that we ultimately achieve the customer's specification limits. Control limits come from the natural variation within the process. Control limits should always lie within the specification limits.
    bye
     

    0
    #69806

    Sridhar
    Member

    Hi,
    When you say your process is at 3 sigma or 6 sigma, you say it with respect to the specification limits. When your process is at three sigma, you can fit 3 sigma of variation on either side within the specifications. When it is at six sigma, you can fit 6 sigma on either side within the same specifications, which means your sigma has been reduced, which in turn reduces the number of defects.
    Hope this will help you
    thanks
    sridhar

    0
    #69570

    Sridhar
    Member

    hi,
    Basically, the R chart depends on the extreme values in the data, because range = max - min.
    The S chart considers the overall variation in the process.
    I feel that if you want to find smaller shifts and you have a large enough sample size, go for the S chart. If you are not interested in smaller shifts in the process, you can go for the R chart.
    bye
    sridhar

    0
    #69569

    Sridhar
    Member

    hi ,
    I don't know whether this is correct or not, but I am trying to answer it.
    We know that if X ~ N(mu, sigma), then X-bar ~ N(mu, sigma/sqrt(n)). Using this, let's do the calculation:
    X ~ N(100, 12)
    n = 36
    X-bar ~ N(100, 12/sqrt(36)) = N(100, 2)
    P(X-bar > 104) = P((X-bar - mu)/sigma > (104 - 100)/2)
    = P(Z > 2) = 0.0228
    The probability of the sample average being greater than 104 is about 0.0228.
     Hope this helps you
    bye
    sridhar
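    As a one-line check of that probability with scipy:

    from scipy.stats import norm

    print(norm.sf(104, loc=100, scale=2))   # ~0.0228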
     

    0
    #69483

    Sridhar
    Member

    Hi,
    I am not that experienced in SPC, but I have theoretical knowledge. For the first three (a, b, c) you can use an X-bar R chart (sample size 2 or more) or an individuals moving range chart (sample size 1). If you want to detect small shifts, go for CUSUM charts.
    For the other three, as they are attributes, you can go for a u, c, p, or np chart. If the data follows a binomial distribution, go for np (fixed sample size) or p (variable sample size); if the data follows a Poisson distribution, go for u (variable sample size) or c (constant sample size).
    I hope this helps you.
    bye
    sridhar

    0
Viewing 85 posts - 1 through 85 (of 85 total)