Sridhar
@SridharMember since April 25, 2006
Forum Replies Created

AuthorPosts

February 19, 2010 at 4:44 am #189531
Akbar,
No problem, but it is better to know the process areas. Six Sigma is a vehicle you can use to address the QPM, TS, CAR, etc. process areas in CMMI. Ours is a CMMI L5 certified company and we are using Six Sigma to address the issues. You can also contact me at [email protected] net.
Sridhar0February 18, 2010 at 8:35 am #189487Hi,
First, look at the metrics you are already capturing regularly for your management reviews. Then, in principle, any point going outside your baseline limits mandates raising a CAR to bring it back in control, unless it is attributed to some special cause. These out-of-control points are probable candidates for a Six Sigma project.
G.N.Sridhar0February 12, 2009 at 6:58 am #181071Hi,
I will also be very glad to receive the material. Can you send me a copy to [email protected], please?
Thanks
Sridhar
(from India)0May 15, 2008 at 6:25 am #171980Hi BC,
Can you also forward me the ppt, if it is OK with you?
Thanks
Sridhar0December 5, 2007 at 7:24 am #64902Brandon,
I would appreciate it if you could also email me the 8D samples; my email: [email protected]
Sridhar0October 4, 2007 at 6:07 am #162509I too am in search of some tips on this subject. In our painting line we face the same quality problem: sheet metal component paint thickness (electrostatic method) varies from the set specification. How can we benchmark? Does anyone have an example or a data collection method? Regards,
0September 5, 2007 at 6:28 am #160776Thanks for your comments. Regarding the problem, please note that more or less paint on the sheet metal component will not directly affect function / performance, but more paint may increase the cost.
Please give some road map for doing a DOE.
Regards,
Sridhar0September 4, 2007 at 5:31 am #160726Thanks for the clarifications / views, Mr. Annon / Mr. Shan. To give a bit more information: we have a painting process (electrostatic powder painting, semi-automatic line, manual touch-up and auto spray powder) for sheet metal components, with a specified coat thickness of minimum 70 microns and maximum 110 microns. We monitor this specification by random inspection during the day while the process runs. On some days the coat thickness goes up or down, and feedback instructions are then given to the powder sprayer to bring it under control.
Now I would like to know: is it necessary to check the process capability for this process? Is it worth spending time on, and in that case how can I proceed? As you know, in this process the paint thickness varies across the area of the sheet, so we cannot maintain exactly 70 microns throughout.
In that case, how can sampling / inspection data be collected for calculating the process capability? For your information, we have many variants of components running within a batch itself (more than 50 items with different profiles).
Please give your views. If anyone has a similar sample Excel data sheet, please let me have it.
Appreciate your feedback.
Regards,
Sridhar0June 7, 2006 at 12:28 pm #64345Hi Joe Musallam ,
Can you also mail me your report on the hypothesis tests? It would be very useful.
Sorry… you can send the file to [email protected]
Thank you in advance.
Sridhar0June 7, 2006 at 12:23 pm #64344Hi Joe,
Can you also mail me your report on the hypothesis tests? It would be very useful.
Thank you in advance.
Sridhar0March 28, 2005 at 8:12 am #116909Hi, Ken,
I am a novice to six sigma but have been using control charts for quite some time. Let me attempt to understand your query.
The very name "control chart" implies it is for a process that is already in control. Given a specified tolerance from the customer (specification limits, SL), we can set an internal limit for our process parameter that is more stringent than the SL. Control limits (CL) are generated from the natural variation in the process whose output we track using the control charts. Tracking and comparing present performance using CLs generated from a standard or initial sample does not account for the present variation in the process.
Any thoughts?
Sridhar
0January 8, 2004 at 7:48 am #94025Hi Ravi,
A slightly different twist: both your definitions are acceptable; it all depends on the level at which you are doing the project. At a system level, a defect would be a failed product at the customer. When you do a CTQ flowdown, you could find that for a subsystem-level project, line defects would be the defects you want to fix.
I hope this helps
Cheers
Sridhar0July 25, 2003 at 5:05 am #88320Hi Mridul,
Thanks for the reply. You seem to have realized the benefit of using Six Sigma tools for managing ERP projects.
KavSS, I, and others would be very interested to hear from you about any operational-level case study or notes from experience.
Thanks and Regards
Sridhar
0July 18, 2003 at 4:33 am #88094Thanks, Michelle, for your feedback.
Regards
Sridhar
0July 4, 2003 at 12:06 pm #62986Have you thought about signing up for formal training in Six Sigma? I think it is almost impossible to do Six Sigma correctly without formal training. It can be done (just as you could work successfully as an engineer without having achieved a bachelor's degree), but it is less likely.
0March 14, 2003 at 3:11 pm #83844I tend to agree with the original post in this thread. The problem lies in the variation associated with ASQ (no pun intended). Some chapters are excellent, while others flounder. Some articles in Quality Progress are helpful, while others are a waste of time. The article highlighted in this thread should not have been published (I totally agree), although others are very useful.
How can ASQ become an organization with higher standards? I feel funny even asking this type of question, but where else can we discuss it? I don't believe ASQ has ever asked me for my opinion, and yes, I am a member. What continuous improvement standards does ASQ follow in the performance of its daily duties and processes?
S.V.0July 16, 2002 at 3:16 am #77266Hi Gabriel,
How many data points are needed in this case?

Percentile        85     90     95     98     99
Actual (hr/pg)    0.61   0.67   0.90   1.10   1.15
Fitted (hr/pg)    0.60   0.70   1.40   2.00   3.00

The above table gives actual and fitted (lognormal) values. Looking at this table, we decided on the 85th percentile as our UCL. What do you feel the UCL should be?
sridhar
0July 15, 2002 at 12:59 pm #77245Hi Gabriel,
Thanks for the reply. The values I posted last time are the values of the fitted distribution. Also, before fitting the distribution I removed all the outliers using box plots. I used 72 data points. I checked for random patterns as well; these values actually come from different projects, so obviously you won't find any pattern. I hope I answered all the points you asked about.
thanks
sridhar
0July 13, 2002 at 5:05 am #77231Hi Gabriel
Once again, thanks for the information. But I need one more clarification: don't you think we also need to look at the actual values for this kind of distribution? Let me explain why. As my distribution is very skewed toward the end, I am getting the following values.
For the 85th percentile I get 0.6 hours per page.
For the 90th percentile, as the frequency is very low toward the right, I get 0.7 hr per page.
Similarly, going further, the 95th percentile is 1.4 hr/page, the 98th percentile 2.0 hr/page, and the 99th 3.0 hr/page.
Do these actual hr/page values need to be considered while setting the control limits, especially in the non-normal case?
thanks
sridhar
0July 13, 2002 at 4:53 am #77230Hi ted,
The control chart on preparation time doesn't say that you have to inspect within the bands. Yes, you are right that we have to keep control of the defects. One reason for finding fewer or more defects is the time spent on inspection. There may be many other reasons as well: the people inspecting don't have proper experience, the number of people inspecting, the person who wrote the document is very new, the number of pages, the complexity of the document, etc. So what we are trying to say is that if anything goes out of control, look for the root cause and take corrective action. The root cause may be any of the reasons mentioned above.
thanks
0July 12, 2002 at 2:54 pm #77209Hi Gabriel,
Thanks for the reply; this is exactly the kind of reply I was looking for. But regarding the control limits you mentioned: my data follows a lognormal distribution which is very skewed on the right side. If we go for the 99.8th percentile, I think we are leaving very little scope for out-of-control points. What do you feel?
0July 12, 2002 at 1:48 pm #77202Hi pooja,
This morning I received a mail from QAI saying they are going to conduct a two-day workshop on Understanding Six Sigma. The schedule is as follows:
Hyderabad: 24th and 25th July 2002; Chennai: Aug 27 and 28; Pune: Aug 8 and 9.
You can find more details at their website.
http://www.qaiindia.com
regards
sridhar
0July 12, 2002 at 1:33 pm #77200Hi bob,
Thanks for the reply. I fully agree with what you mentioned, but let me explain the problem once again. I am working in a software company where we want to introduce control charts for the document inspection process. For each project I get only one document, and it is inspected once. What we did is collect the data for all the documents inspected in the last year and find the distribution; for a particular characteristic the data follows a lognormal. Now, how do I calculate the control limits? Do I need to transform the data to normal and calculate the control limits? I won't have any subgroups, because all software projects differ in one sense or another. What are my options, and which one is better?
1. Transform to normal and then calculate the control limits.
2. Since I know the data follows a lognormal, I can fit the distribution and set the control limits at some percentiles based on the fitted and actual values.
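To illustrate the two options above, here is a sketch with simulated data (the numbers are invented, not figures from this thread): option 1 sets 3-sigma limits on the log scale and maps them back, option 2 fits a lognormal and reads the UCL off a high percentile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated inspection effort (hr/page); real data would replace this.
data = rng.lognormal(mean=-0.5, sigma=0.4, size=72)

# Option 1: transform to (approximate) normality, set a 3-sigma limit,
# then map the limit back to the original scale.
logged = np.log(data)
ucl_option1 = np.exp(logged.mean() + 3 * logged.std(ddof=1))

# Option 2: fit a lognormal and take the UCL from a chosen percentile
# (99.865% corresponds to the +3-sigma tail of a normal).
shape, loc, scale = stats.lognorm.fit(data, floc=0)
ucl_option2 = stats.lognorm.ppf(0.99865, shape, loc=loc, scale=scale)

print(ucl_option1, ucl_option2)
```

With a reasonable sample the two options should land close together; a large disagreement would suggest the lognormal fit is poor.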
I am very sorry to repeat this, but it is very important for me to know which one to go for.
thanks
sridhar
0July 12, 2002 at 8:52 am #77190Hi CT
Thanks for the reply. Yes, I need control charts; let me explain why. We are inspecting a document, and the number of defects found will definitely depend on the time spent. Suppose our time spent goes out of control while the defect density of the document is within control; then we can find the root cause so that the same cause does not occur in the next project. I hope I explained properly.
thanks
sridhar
0July 12, 2002 at 7:24 am #77187Hi mike,
I know about the Central Limit Theorem, but my problem is somewhat different. Let me give some more information. I am trying to develop a control chart for effort spent per page of a document during inspection. I have only one point per document; I can't have a sample of more than one point from which I could get an average and apply the Central Limit Theorem.
Can you say how to handle this situation? The other thing is that I checked in Minitab that the data follows a lognormal distribution.
thanks
sridhar
0July 12, 2002 at 4:42 am #77182Hi Daren,
You can compare two samples with different sample sizes as well, but here you have to make an assumption about your population standard deviations. If you assume the standard deviations are equal, then you can calculate the pooled standard deviation.
How to do this is shown on this website:
http://www.itl.nist.gov/div898/handbook/eda/section3/eda353.htm
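As a sketch of the pooled approach (sample values invented for illustration), the hand-computed pooled t statistic matches scipy's equal-variance two-sample t-test even when the sample sizes differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10.0, 2.0, size=25)   # first sample
b = rng.normal(10.5, 2.0, size=40)   # second sample, different size

# Pooled standard deviation (assumes equal population variances).
n1, n2 = len(a), len(b)
sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
             / (n1 + n2 - 2))

# Manual pooled t statistic ...
t_manual = (a.mean() - b.mean()) / (sp * np.sqrt(1 / n1 + 1 / n2))

# ... matches scipy's equal-variance two-sample t-test.
t_scipy, p = stats.ttest_ind(a, b, equal_var=True)
print(t_manual, t_scipy, p)
```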
thanks
A.Sridhar
0July 12, 2002 at 4:13 am #77181Hi,
If you have more than 30 data points, you can calculate the standard deviation as sqrt(p*(1 - p)) (based on the Central Limit Theorem / normal approximation for proportions). The rest of the calculations remain the same; I am not sure whether this method of calculating Cp is correct or not.
thanks
0July 11, 2002 at 11:45 am #77138Hi,
How many data points do you have? If you have more than 30 points you may go for a t-test; I am not sure whether this is correct or not.
The other way is to use non-parametric tests like Anderson-Darling, Mann-Whitney, etc., which don't depend on the kind of distribution.
thanks
A.Sridhar
0July 11, 2002 at 7:34 am #77131Hi Rullen,
Sample size can be calculated depending on three parameters:
1. Confidence level
2. Margin of error
3. Variance (for continuous data) or the proportion of failure / acceptance (for attribute data); this can be estimated from the past, and if you have no estimate you can take 0.5 to get the maximum sample size.
There is a formula for calculating sample size based on the above three factors.
I can send you a presentation on sampling methods and sample size, and an Excel template for calculating the sample size; if you are interested, contact me at
[email protected]
thanks
sridhar
0July 10, 2002 at 1:44 pm #77104Hi,
Why don't you try this online handbook? I hope it is going to help you a lot.
http://www.itl.nist.gov/div898/handbook/dtoc.htm
thanks
A.Sridhar
0July 1, 2002 at 8:38 am #76771Hi nitin,
I think you have misunderstood the definition of interaction. Interaction occurs between factors, not between levels. By considering all 2x2 combinations we can find out whether there is any interaction between two factors, not between any levels.
If you need more clarification: [email protected]
regards
sridhar0June 19, 2002 at 1:31 pm #76511Hi,
I once did a similar kind of survey in our organization. At that time I tested for significant differences between different plots, using ANOVA, and also tested each pair of buildings independently using a non-parametric test.
The disadvantage of using ANOVA is its assumption that the data should follow a normal distribution, which is very difficult here because our scale is limited.
Using independent non-parametric tests reduces our confidence level.
But I used both ANOVA and non-parametric tests. I don't know whether this is right or not, but I got good results from the analysis.
I used Minitab for the above.
I hope this is going to help you. If you need any more information, contact me at [email protected]
thanks
A.Sridhar
0May 10, 2002 at 3:14 am #75359Hi,
That depends on your alternative hypothesis. If your alternative hypothesis is of the "not equal" kind, then you take a two-tailed test; if your alternative hypothesis is that one is greater or less than the other, then you use a one-tailed test.
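A quick numeric illustration (the statistic value 1.8 is made up): the two-tailed p-value is simply double the one-tailed tail area for a symmetric distribution.

```python
from scipy import stats

z = 1.8  # observed test statistic, illustrative value

p_one_tailed = stats.norm.sf(z)       # H1: mu > mu0 (one tail)
p_two_tailed = 2 * stats.norm.sf(z)   # H1: mu != mu0 (both tails)

print(p_one_tailed, p_two_tailed)
```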
thanks
A.Sridhar
0May 7, 2002 at 3:03 am #75217Hi,
One of the best ways of deploying the voice of the customer is through QFD (Quality Function Deployment).
thanks
0April 30, 2002 at 4:12 am #74982Hi,
I am not sure whether my answer will help you or not.
What I feel is: first, develop a decision model, e.g. "I will reject this bag if it is above the USL or below the LSL." Then develop a sample size based on attribute-type data. For this you need to assume the following:
1. Confidence level
2. Proportion of bad or good (take 0.5 for the maximum sample size)
3. Margin of error (in percent)
Then calculate the sample size with the above assumed parameters. Once the process matures, you will have the standard deviation, and you can then recalculate your sample size.
hope this helps you.
thanks
A.Sridhar
0April 23, 2002 at 12:13 pm #74673Hi Camargo,
Sample size can be determined based on the following three factors:
1. Size of population
2. Margin of error
3. Probability of failure or success
There is a simple formula to calculate the sample size. Send me your mail id and I will send you an Excel sheet which calculates the sample size when you supply the required parameters.
thanks
A.Sridhar
0March 20, 2002 at 3:10 am #73409Hi wasim,
Cpk is calculated as
min { (USL - Average)/(3*Sigma), (Average - LSL)/(3*Sigma) }
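As a small sketch, the formula above translates directly into code (the numbers are invented):

```python
def cpk(mean, sigma, usl, lsl):
    """Process capability index: the worse of the two one-sided capabilities."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# A centered process: Cpk equals Cp.
print(cpk(mean=50, sigma=2, usl=62, lsl=38))   # 2.0
# The same process shifted off target: Cpk drops.
print(cpk(mean=53, sigma=2, usl=62, lsl=38))   # 1.5
```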
sridhar
0March 4, 2002 at 3:52 am #72746Hi,
Why don't you try a CUSUM control chart? It helps in finding very small shifts, which other control charts can't detect. If you need more information, mail me at
[email protected]
thanks
sridhar
0February 28, 2002 at 4:07 am #72581Hi,
I have one with me. Mail me at [email protected]
sridhar
0February 16, 2002 at 7:04 am #72236Hi Erick,
Thanks for the reply. Can you please give me your e-mail id? I will be in touch with you if I need any clarifications.
thanks
sridhar
0February 15, 2002 at 2:59 am #72193Hi Erik,
Thanks for the reply. I did try this method, but the problem is that most of the cell counts I got are less than 5 and I don't know what to do. Can you suggest what to do if I have cells with counts less than 5?
Another question: because I have all the combinations, can I apply ANOVA (DOE) to find out which plot and which grade gives optimal satisfaction? What I mean is that if I find the plot and grade with the optimum response, then I will provide similar facilities to the other plots and grades as well.
Can I think of such an analysis, and if so, what changes do I have to make in the response column, given that my response is of attribute type?
thanks in advance
sridhar
0February 13, 2002 at 12:38 pm #72108Hi,
I don't think you have to wait for 220 tasks, because you are converting them into defects per million opportunities anyway. Whether it is 220 or fewer doesn't matter; the final metric you are going to report is defects per million opportunities.
sridhar
0February 1, 2002 at 3:04 am #71724Hi chandra,
Send me your mail id.
sridhar
0February 1, 2002 at 3:00 am #71723Hi,
A lot of discussion has taken place in this forum on sample size. Why don't you search by keyword? Try this link.
https://www.isixsigma.com/forum/showmessage.asp?messageID=8374
regards
sridhar
0January 27, 2002 at 7:17 am #71584Hi,
The 3.4 comes from the normal distribution curve. Check a standard normal distribution curve at the 4.5 sigma value; multiply the area outside the curve by one million and you get 3.4. You may ask why we take 4.5 instead of 6 sigma.
A 1.5 sigma shift is assumed for the long term; that is, the process may shift by 1.5 standard deviations over the long term. That is why we consider 4.5 sigma after allowing for the 1.5 shift. If you consider the actual 6 sigma, you get about 2 parts per billion instead of 3.4 parts per million.
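This arithmetic is easy to verify with scipy (my own check, not part of the original reply):

```python
from scipy import stats

# One-sided tail area beyond 4.5 sigma, scaled to a million opportunities.
dpmo_shifted = stats.norm.sf(4.5) * 1e6

# Without the 1.5-sigma shift: both tails beyond 6 sigma, per billion.
ppb_centered = 2 * stats.norm.sf(6.0) * 1e9

print(round(dpmo_shifted, 1), round(ppb_centered, 1))  # 3.4 DPMO, ~2 PPB
```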
hope this helps you
regards
A.Sridhar
0January 27, 2002 at 4:49 am #71581Hi,
Probably you didn't get the right concept of Six Sigma. Six Sigma implies that you are able to fit 6 standard deviations between your target (center) and the closest specification limit. Saying 5 sigma implies you can fit only 5 standard deviations between your target and the closest specification, which means your standard deviation has increased, which in turn means more output falls outside the specifications. Hope this gives you some idea.
regards
sridhar
0January 25, 2002 at 1:36 pm #71535Hi,
I think I will go with Erick: for this kind of data it is better to take the contingency table approach. I am not sure whether I understood your question correctly or not; I am trying to explain what I understood.
The results are:

         Large deals   Small deals   Total
Adam     21 (a)        94 (b)        115
Joan     27 (c)        58 (d)        85
Total    48            152           200

Null hypothesis: P1 = P2
Alternative: P1 ≠ P2
The predicted (expected) values are calculated as
a = 48*115/200 = 27.6   b = 115*152/200 = 87.4
c = 48*85/200 = 20.4    d = 152*85/200 = 64.6
Chi-square Ch(cal) = Sum (observed - predicted)^2 / predicted
= 4.89
Chi-square Ch(critical, df = 1, alpha = 0.05) = 3.84
Ch(cal) > Ch(critical), so we reject the null hypothesis and conclude that the proportions of deals are not the same for Adam and Joan.
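For anyone following along, the same test can be reproduced with scipy (my own check of the hand calculation above):

```python
import numpy as np
from scipy import stats

# Deals closed by each salesperson (the table from the post).
observed = np.array([[21, 94],    # Adam: large, small
                     [27, 58]])   # Joan: large, small

# correction=False reproduces the hand calculation (no Yates correction).
chi2, p, dof, expected = stats.chi2_contingency(observed, correction=False)
print(round(chi2, 2), dof)  # 4.89 with 1 degree of freedom
```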
hope this helps you
regards
sridhar
0January 25, 2002 at 3:17 am #71524Hi,
It is very simple, but you know the disadvantage: when you have around 10 things to prioritize, you can't compare all 10 at a time. That is what we avoid with AHP: it compares only pairwise, and later you can even check whether there is any bias in the scores; if so, you do it again until there is no bias. Yes, it also has disadvantages: if you have more items to compare you need many more pairwise comparisons, which becomes cumbersome, but it is still useful.
best regards
sridhar
0January 24, 2002 at 2:00 pm #71474Hi Antony,
Can you send me your mail id? I will send you a paper and an Excel sheet for doing prioritization using AHP.
sridhar
0January 24, 2002 at 3:27 am #71452Hi,
You have to apply a transformation to the response data if it is of attribute type, depending on the distribution; in your case it may follow a Poisson distribution. Apply the transformation and then do the same analysis as you would for a continuous response.
You can look into Minitab for this. Or you can go through this website, which has some questions and answers on DOE.
http://www.minitab.com/support/answers/index.asp?topic=DOE
hope this helps you
sridhar
0January 24, 2002 at 3:07 am #71450Hi,
Try AHP (Analytic Hierarchy Process).
sridhar
0January 23, 2002 at 3:18 am #71424Hi,
Sample size calculation depends on the following factors:
1. Your confidence level
2. Margin of error
3. Standard deviation of the population (in the case of proportions it is sqrt(p*q))
Once you have assumed these values, calculate the sample size as
n = Z^2 * (std deviation)^2 / (error)^2
The final decision on sample size depends on the response rate: inflate the obtained sample size according to your expected response rate.
hope this helps you
sridhar
0January 23, 2002 at 3:14 am #71423Hi,
One way of prioritizing is AHP (Analytic Hierarchy Process). The concept is to compare only two characteristics at a time, i.e. pairwise comparison. All pair combinations are compared and given a score on a scale of 1 to 9. Once all the combinations are done, we standardize the scores to get the most important ones. One more advantage of this method is that you can even test whether the scores were given randomly or whether any bias was there; if bias has occurred, you do the whole process again and re-rank.
Hope this helps you.
sridhar
Download: Analytic Hierarchy Process (AHP) Explanation (Microsoft Word)
Download: Analytic Hierarchy Process (AHP) Template (Microsoft Excel)
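As an illustration of the method (a hypothetical 3-criteria comparison matrix, not taken from the attached files): the priority weights come from the principal eigenvector of the pairwise matrix, and the bias check mentioned above is the consistency ratio.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria (1-9 scale):
# entry [i, j] says how much more important criterion i is than j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Principal eigenvector gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio checks whether the scores were given coherently
# (CR < 0.1 is the usual acceptance rule; RI = 0.58 for a 3x3 matrix).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58

print(weights.round(3), round(cr, 3))
```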
0January 21, 2002 at 3:35 am #71360January 18, 2002 at 4:52 am #71326Hi,
This is just manipulation of the formulas. I will prove the calculation.
Cpk = min { (USL - Average)/(3*sigma), (Average - LSL)/(3*sigma) }
Cp = (USL - LSL)/(6*sigma)
so sigma = (USL - LSL)/(6*Cp)
Putting sigma into the Cpk equation:
Cpk = 2*Cp/(USL - LSL) * min{ USL - Average, Average - LSL }
With T = (USL + LSL)/2, write USL - Average = (USL - T) + (T - Average) and Average - LSL = (Average - T) + (T - LSL), where USL - T = T - LSL = (USL - LSL)/2. The minimum then becomes:
Cpk = 2*Cp/(USL - LSL) * { (USL - LSL)/2 - |Average - T| }
    = Cp * { 1 - 2*|Average - T|/(USL - LSL) }
    = Cp*(1 - k)
where k = 2*|Average - Tolerance center| / Tolerance width.
Hope you can understand this calculation. If not, send me your mail id.
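A quick numeric check of the identity (values invented):

```python
usl, lsl = 62.0, 38.0
mean, sigma = 53.0, 2.0   # off-center process

cp = (usl - lsl) / (6 * sigma)
cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

t = (usl + lsl) / 2                  # tolerance center
k = 2 * abs(mean - t) / (usl - lsl)  # relative off-centering

print(cpk, cp * (1 - k))  # both equal: the identity Cpk = Cp*(1 - k) holds
```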
sridhar
0January 17, 2002 at 3:50 am #71288Hi,
The Box-Cox method is used to make non-normal data normal.
You can go through this handbook
http://www.itl.nist.gov/div898/handbook
This gives you some information about the Box-Cox method. After that you can use any statistical package like Minitab, which will do the transformation.
thanks
sridhar
0January 16, 2002 at 3:51 am #71269Hi,
Why don't you try a transformation like Box-Cox? Generally this transformation makes the data close to normal. Then you calculate your control limits and convert them back to your original values.
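A minimal sketch of that workflow with simulated skewed data (scipy's `boxcox` picks the transformation parameter by maximum likelihood; the data here is made up):

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(3)
data = rng.lognormal(mean=1.0, sigma=0.5, size=100)  # skewed sample

# Transform toward normality; lmbda is chosen by maximum likelihood.
transformed, lmbda = stats.boxcox(data)

# Control limits on the transformed scale ...
center = transformed.mean()
spread = transformed.std(ddof=1)
ucl_t = center + 3 * spread
lcl_t = center - 3 * spread

# ... mapped back to the original units.
ucl = inv_boxcox(ucl_t, lmbda)
lcl = inv_boxcox(lcl_t, lmbda)
print(lcl, ucl)
```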
sridhar
0January 16, 2002 at 3:38 am #71268Hi,
The simple formula to calculate the sample size is
n = Z^2 * p*(1 - p)/(error)^2 for an infinite or very large population
If you have a finite population, make the correction as below:
Sample size (finite population) = 1/{(1/n) + (1/N)}
n = as calculated by the above formula
N = population size
Z = the normal value corresponding to your confidence interval (for example, for a 95% CI, Z = 1.96)
p = population proportion (assume 0.5 to be on the safe side)
error = margin of error, assumed as a percentage (e.g. 5%)
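The two formulas above can be wrapped in a small helper (the function name is my own, for illustration):

```python
import math

def sample_size(confidence_z, p, margin, population=None):
    """Sample size for estimating a proportion, with optional
    finite population correction."""
    n = confidence_z**2 * p * (1 - p) / margin**2
    if population is not None:              # finite population correction
        n = 1 / (1 / n + 1 / population)
    return math.ceil(n)

print(sample_size(1.96, 0.5, 0.05))                   # 385 (infinite population)
print(sample_size(1.96, 0.5, 0.05, population=1000))  # 278
```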
Hope this helps you
If you need more clarification, please mail me at
[email protected]
thanks
Sridhar
0January 14, 2002 at 3:26 am #71219January 14, 2002 at 3:17 am #71218Hi,
Always use Cpk for the Six Sigma calculation. When your process is on target, Cp and Cpk are the same; Cpk is always less than or equal to Cp.
thanks
sridhar
0January 9, 2002 at 2:58 am #71073Hi,
Send me your e-mail and I will send you two papers.
bye
sridhar
0January 4, 2002 at 11:56 am #70979Hi ravi,
Why don't you try this journal?
http://www.trizjournal.com/archives/
hope this helps you in understanding TRIZ
bye
sridhar
0January 3, 2002 at 1:05 pm #70950Hi,
Why don't you try this website? It will give you a very simple and elaborate explanation with examples.
http://www.itl.nist.gov/div898/handbook
Hope this may help you for better understanding.
bye
sridhar
0December 29, 2001 at 3:56 am #70905Hi Ali,
There is a website of the International Society for Six Sigma Professionals (http://www.isssp.org). You can go to this website; there you will find some Six Sigma affiliates at the resource center. One listed there is:
House of Quality Business and Management Consultation (Pakistani)
I don't know whether they conduct any trainings or not, but you may get some more information there.
Hope this is of interest.
sridhar0December 26, 2001 at 3:34 am #70821Hi ram,
I sent the case studies to the address given, but it bounced back. Give me the correct mail id and I will send them again.
bye
sridhar0December 26, 2001 at 3:31 am #70820Hi,
We assume a 1.5 sigma shift in the process to include the long-term effect. Every process has a chance to shift for many reasons: tool wear, change in operators, change in environment, and so on. You can't take up a Six Sigma project every time a shift occurs, because of the cost and effort involved. That is why the 1.5 sigma shift is assumed.
hope this helps you
thanks
sridhar0December 24, 2001 at 3:40 am #70800Hi hoon,
Cpk and Cp will be equal when your process is centered. If your process is off-center, then Cp will be higher than Cpk. Cp tells you how much process capability you have, while Cpk tells you whether your process has shifted. The paragraph you mentioned says to keep your process at Cpk = 1.33 so that your process capability is 4 sigma when centered; this extra sigma is allowed in order to cover shift in the process.
hope this gives you some help.
thanks
———————————————————————0December 21, 2001 at 5:33 am #70750Hi akro,
I have two papers on FMEA. One is a case study done at a hospital, i.e. how to improve the service in a restaurant in a hospital. The second is a paper explaining FMEA.
If you are interested, send me your email and I will send you the two papers.
sridhar
0December 20, 2001 at 1:33 pm #70720Hi rajesh,
Why don't you have a mean? Whatever data points you have, you definitely have a mean; what you will not have is a lower specification limit.
You can calculate Cpk as (USL - mean)/(3*Sigma).
sridhar
—————————————————————0December 19, 2001 at 4:48 am #70674Hi suzanna,
I will be very grateful if you send the report to my mail id
[email protected]
thanks
sridhar0December 14, 2001 at 7:30 am #70548Hi stan,
If the process is centered on the target, then Cp and Cpk are equal, isn't that right?
———————————————————————
sridhar0December 11, 2001 at 12:33 pm #70477Hi,
Predictive modelling means you predict some response variable using certain dependent parameters. I am in software, so I will give an example with a software process.
Take the inspection process in software. We inspect different artifacts (documents) and prepare a defect log. The number of defects may depend on many factors: size of the artifact, complexity, number of people inspecting, experience of the inspectors, effort spent, etc. From the past data we fit a model for the expected number of defects given these parameters. This is the predicted value; when you do the actual inspection you get some other value, and you can then decide whether to go for one more inspection or not.
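A minimal sketch of such a model, using ordinary least squares on made-up historical data (the factors, coefficients, and prediction point are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical historical inspection records: pages and effort (hours)
# as predictors, defects found as the response.
pages = rng.integers(10, 80, size=40)
effort = pages * 0.2 + rng.normal(0, 1, size=40)
defects = 0.5 * pages + 2.0 * effort + rng.normal(0, 3, size=40)

# Fit defects ~ pages + effort by least squares.
X = np.column_stack([np.ones_like(pages, dtype=float), pages, effort])
coef, *_ = np.linalg.lstsq(X, defects, rcond=None)

# Predict the expected defect count for the next artifact; a large gap
# between predicted and actual could trigger a re-inspection decision.
predicted = coef @ [1.0, 50, 10.5]
print(coef.round(2), round(predicted, 1))
```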
hope this helps you.
thanks
—————————————————————0November 30, 2001 at 3:55 am #70268Hi,
I know that if your process is at six sigma your Cp is 2. It is calculated as follows.
Basically Cp = (USL - LSL)/(6*stddev). When your process is at the six sigma level, it means you can fit 12 stddev between the USL and LSL,
i.e. Cp = 12*stddev / (6*stddev) = 2
Cpk = 4.5*stddev / (3*stddev) = 1.5 (taking the 1.5 shift on one side)
Maybe you can extend this idea to other sigma levels also.
hope this helps you
thanks
——————————0November 29, 2001 at 9:59 am #70232Hi,
Please send me your mail id. I have two case studies on prioritizing software requirements which used AHP (Analytic Hierarchy Process); they also have a good explanation of the process. Send me your mail id and I will send you the papers.
bye
———–0November 29, 2001 at 9:38 am #70231Hi,
Normally the process capability should be at minimum 1.33. If it is more, well and good.
——————–0November 29, 2001 at 9:25 am #70230Hi,
The coefficient of variation helps you decide whether to use 1 sigma or 3 sigma limits to develop control charts. This is only a recommendation. When your sigma is very high, if you take 3 sigma values, almost every point will fall within control because of the large distance between the control limits and the center value. To avoid this, we check the coefficient of variation and decide upon the control limits.
Hope this gives some idea
bye0November 29, 2001 at 9:21 am #70229Hi,
The Taguchi loss function says that as you move away from your target value, your cost increases. Six Sigma says the same: try to be centered at the target and have less variation, so that you make only 3.4 defects per million. To attain Six Sigma you can use any method or technique; one of those techniques is DOE.
bye
0November 22, 2001 at 10:54 am #70114Hi,
I have not experienced this kind of situation, but I will try to answer within my knowledge. Even if the machines are the same, there may be some variation in the products you are making, maybe because of operators, machine parameter settings, etc. So what I feel is: try some hypothesis testing on the 12 machines, i.e. test whether the sample means and variances of all 12 machines are the same or not. If they are the same, you can use the same control limits; if not, use different control limits for those machines. Hope this gives you a way to find a solution.sridhar0November 22, 2001 at 3:40 am #70108Hi Ravi,
You are right. But what I mean to say is that you should take care that your control limits are always less than or equal to the specification limits, to assure that a process within control also meets the specifications.
sridhar
————–0November 21, 2001 at 10:21 am #70088Hi
Specification limits are set by the customers only. We set our control limits so that we finally achieve the customer's specification limits. Control limits come from the natural variation within the process. Control limits are always less than or equal to the specification limits.
bye
0November 8, 2001 at 11:51 am #69806Hi,
When you say your process is at 3 sigma or six sigma, you say it with respect to the specification limits. When your process is at three sigma, it means you can fit 3 + 3 sigma of variation within the specifications. When you say it is six sigma, it means you can fit 6 + 6 sigma within the same specifications, which means your sigma has been reduced, which in turn reduces the number of defects.
Hope this will help you
thanks
sridhar0October 29, 2001 at 12:20 pm #69570hi,
Basically, the R chart depends on the extreme values in the data, because range = Max - Min.
The S chart considers the variation (standard deviation) in the process.
I feel that if you want to find smaller shifts and you have a large enough sample size, go for S charts. If you are not interested in smaller shifts in the process, then you can go for the R chart.
bye
sridhar0October 29, 2001 at 12:15 pm #69569hi ,
I don't know whether this is correct or not, but I am trying to answer it.
We know that if X ~ N(mu, sigma), then Xbar ~ N(mu, sigma/sqrt(n)). Using this, let's calculate:
X ~ N(100, 12)
n = 36
Xbar ~ N(100, 12/sqrt(36)) = N(100, 2)
P(Xbar > 104) = P((Xbar - mu)/sigma > (104 - 100)/2)
P(Z > 2) = 0.0228
The probability of the sample average being greater than 104 is 0.0228.
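The same probability can be checked with scipy:

```python
from scipy import stats

mu, sigma, n = 100, 12, 36
se = sigma / n ** 0.5                      # standard error of the mean = 2

p = stats.norm.sf(104, loc=mu, scale=se)   # P(Xbar > 104) = P(Z > 2)
print(round(p, 4))  # 0.0228
```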
Hope this helps you
bye
sridhar
0October 25, 2001 at 9:34 am #69483Hi,
I am not that experienced in SPC, but I have theoretical knowledge. For the first three (a, b, c) you can use an Xbar-R chart (sample size 2 or more) or an Individuals-Moving Range chart (sample size 1). If you want to detect small shifts, then go for CUSUM charts.
For the other three, as they are attributes, you can go for a u, c, p, or np chart. If the data follows a binomial, go for np (fixed sample size) or p (variable sample size); if the data follows a Poisson, go for u (variable sample size) or c (constant sample size).
Hope this will help you.
bye
sridhar0 